CN115562871A - Memory allocation management method and device - Google Patents

Memory allocation management method and device

Info

Publication number
CN115562871A
Authority
CN
China
Prior art keywords
memory
physical memory
physical
idle
bandwidth
Prior art date
Legal status
Pending
Application number
CN202211332358.5A
Other languages
Chinese (zh)
Inventor
姚振国
Current Assignee
Jingdong Technology Information Technology Co Ltd
Original Assignee
Jingdong Technology Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Jingdong Technology Information Technology Co Ltd filed Critical Jingdong Technology Information Technology Co Ltd
Priority to CN202211332358.5A
Publication of CN115562871A
Legal status: Pending

Classifications

    • G06F9/5016 Allocation of resources to service a request, the resource being the memory (under G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU])
    • G06F9/45558 Hypervisor-specific management and integration aspects (under G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines)
    • G06F2009/45562 Creating, deleting, cloning virtual machine instances
    • G06F2009/45575 Starting, stopping, suspending or resuming virtual machine instances
    • G06F2009/45583 Memory management, e.g. access or allocation

Abstract

The invention discloses a memory allocation management method and device, relating to the field of computer technology. One embodiment of the method comprises: in response to a page fault exception generated when a virtual machine process accesses a user space, allocating physical memory for the user space and acquiring a memory type identifier of the physical memory; when the memory type identifier is a preset identifier, determining whether the physical memory is free memory, and performing a zeroing operation on the physical memory only when it is not free memory, so that it becomes free memory; the read bandwidth of physical memory carrying the preset identifier is greater than its write bandwidth. For virtual machines backed by memory whose read bandwidth exceeds its write bandwidth, the method and device can greatly increase creation and startup speed.

Description

Memory allocation management method and device
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method and an apparatus for memory allocation management.
Background
In the prior art, when the kernel allocates memory, it usually performs page table mapping and memory allocation first and then zeroes the allocated memory. For memory whose read bandwidth is greater than its write bandwidth, the zeroing operation is limited by the write bandwidth and therefore takes a large amount of time, which lengthens the creation and startup time of a virtual machine. The problem is especially pronounced for virtual machines with large memory: even if the concurrency of page table mapping and memory allocation is increased, it is difficult to effectively speed up virtual machine creation and startup.
Disclosure of Invention
In view of this, embodiments of the present invention provide a memory allocation management method and device. By setting a memory type identifier, it can be determined whether the allocated physical memory is memory whose read bandwidth is greater than its write bandwidth. When the allocated physical memory carries the preset identifier, it is further determined whether that memory is free memory; a zeroing operation is performed only when it is not free memory and skipped when it is. This can greatly increase the creation and startup speed of virtual machines backed by memory whose read bandwidth is greater than its write bandwidth.
To achieve the above object, according to an aspect of the embodiments of the present invention, there is provided a method for memory allocation management, including:
in response to a page fault exception generated when a virtual machine process accesses a user space, allocating physical memory for the user space and acquiring a memory type identifier of the physical memory;
when the memory type identifier is a preset identifier, determining whether the physical memory is free memory, and performing a zeroing operation on the physical memory when it is not free memory, so that the physical memory becomes free memory; the read bandwidth of physical memory carrying the preset identifier is greater than its write bandwidth.
Optionally, the method further comprises: before allocating physical memory to the user space, setting a memory type field for each physical memory node, and, during kernel driver initialization, writing the memory type identifier into the memory type field according to the memory type of the physical memory.
Optionally, the method further comprises: performing a zeroing operation on the physical memory when the memory type identifier is not the preset identifier, so that the physical memory becomes free memory.
Optionally, a buddy system is used to allocate physical memory for the user space.
Optionally, the virtual machine process processes data packets using the Data Plane Development Kit (DPDK).
According to a second aspect of the embodiments of the present invention, there is provided an apparatus for memory allocation management, including:
a memory allocation module, configured to allocate physical memory for a user space in response to a page fault exception generated when a virtual machine process accesses the user space, and to acquire a memory type identifier of the physical memory;
a memory zeroing module, configured to determine, when the memory type identifier is a preset identifier, whether the physical memory is free memory, and to perform a zeroing operation on the physical memory when it is not free memory, so that the physical memory becomes free memory; the read bandwidth of physical memory carrying the preset identifier is greater than its write bandwidth.
Optionally, the apparatus further comprises an initialization module configured to: before physical memory is allocated to the user space, set a memory type field for each physical memory node, and, during kernel driver initialization, write the memory type identifier into the memory type field according to the memory type of the physical memory.
Optionally, the memory zeroing module is further configured to: perform a zeroing operation on the physical memory when the memory type identifier is not the preset identifier, so that the physical memory becomes free memory.
Optionally, the memory allocation module allocates physical memory for the user space by using a buddy system.
Optionally, the virtual machine process processes data packets using the Data Plane Development Kit (DPDK).
According to a third aspect of the embodiments of the present invention, there is provided an electronic device for memory allocation management, including: one or more processors; a storage device, configured to store one or more programs, where when the one or more programs are executed by the one or more processors, the one or more processors implement the method provided by the first aspect of the embodiment of the present invention.
According to a fourth aspect of embodiments of the present invention, there is provided a computer readable medium, on which a computer program is stored, which when executed by a processor, implements the method provided by the first aspect of embodiments of the present invention.
One embodiment of the above invention has the following advantage or benefit: by setting a memory type identifier, it can be determined whether the allocated physical memory is memory whose read bandwidth is greater than its write bandwidth; when the allocated physical memory carries the preset identifier, it is further determined whether the memory is free memory, a zeroing operation is performed only when it is not free memory, and the zeroing operation is skipped when it is free memory, which greatly increases the creation and startup speed of virtual machines backed by memory whose read bandwidth is greater than its write bandwidth.
Further effects of the above-mentioned non-conventional alternatives will be described below in connection with the embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
fig. 1 is a schematic diagram illustrating a main flow of a memory allocation management method according to an embodiment of the present invention;
FIG. 2 is a flow chart illustrating physical memory registration in an alternative embodiment of the invention;
FIG. 3 is a schematic diagram of the creation and launching of a QEMU virtual machine in the prior art;
FIG. 4 is a schematic diagram of the creation and launching of a QEMU virtual machine in some embodiments of the invention;
FIG. 5 is a diagram illustrating major blocks of an apparatus for memory allocation management according to an embodiment of the present invention;
FIG. 6 is an exemplary system architecture diagram in which embodiments of the present invention may be employed;
fig. 7 is a schematic block diagram of a computer system suitable for use in implementing a terminal device or server of an embodiment of the invention.
Detailed Description
Exemplary embodiments of the invention are described below with reference to the accompanying drawings, in which various details of embodiments of the invention are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Generally, a CPU (Central Processing Unit) can access all peripherals connected to the address bus, including physical memory and IO (Input/Output) devices. However, the address issued by the CPU is not the physical address of a peripheral on the address bus but a virtual address; the MMU (Memory Management Unit) translates the virtual address into a physical address, which is then placed on the address bus. The translation relationship between virtual and physical addresses must first be created in the MMU, and the process of creating it is page table mapping. When no mapping from a virtual address to a physical address exists, or the corresponding physical memory cannot be written, the MMU notifies the CPU that a page fault exception has occurred. Allocating physical memory for the virtual address resolves the page fault. After the page table mapping is created and the physical memory is mapped into the virtual address space, the allocated physical memory is usually zeroed to guarantee system stability; the zeroing operation writes 0 to every byte at the corresponding addresses. The zeroing operation turns the physical memory into free memory, after which write operations can proceed safely.
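As a concrete user-space illustration of this conventional behavior (provided purely for background, not as part of the claimed method), the minimal C program below shows that the first access to an anonymous mapping raises a page fault and the kernel supplies a page that already reads as zero:

    #include <sys/mman.h>
    #include <assert.h>
    #include <stddef.h>

    int main(void)
    {
        size_t len = 4096;
        /* no physical page is allocated yet; only the virtual range is reserved */
        unsigned char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        assert(p != MAP_FAILED);
        /* the first touch faults; the kernel maps a zero-filled physical page */
        assert(p[0] == 0);
        munmap(p, len);
        return 0;
    }
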
For memory whose read bandwidth is greater than its write bandwidth, the zeroing operation is limited by the write bandwidth and therefore takes a large amount of time, which lengthens the creation and startup time of a virtual machine. The problem is especially pronounced for virtual machines with large memory: even if the concurrency of page table mapping and memory allocation is increased, it is difficult to effectively speed up virtual machine creation and startup.
In view of the above, according to an aspect of the embodiments of the present invention, a method for memory allocation management is provided.
Fig. 1 is a schematic diagram illustrating a main flow of a memory allocation management method according to an embodiment of the present invention. As shown in fig. 1, the method for memory allocation management according to the embodiment of the present invention includes:
step S101, in response to a page fault exception generated when a virtual machine process accesses a user space, allocating physical memory for the user space and acquiring a memory type identifier of the physical memory;
step S102, determining whether the memory type identifier is a preset identifier; jumping to step S103 if it is, otherwise jumping to step S105;
step S103, determining whether the physical memory is free memory; jumping to step S104 if it is not free memory, otherwise ending the process;
step S104, performing a zeroing operation on the physical memory so that it becomes free memory;
step S105, performing a zeroing operation on the physical memory so that it becomes free memory.
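The following self-contained C sketch simulates steps S101 to S105 in user space for illustration only; the type and function names are assumptions of this description rather than the kernel implementation:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    #define PAGE_SIZE 4096
    enum mem_type { MEM_DRAM, MEM_RWASYM };  /* MEM_RWASYM: read bandwidth > write bandwidth */

    struct phys_page { enum mem_type type; unsigned char data[PAGE_SIZE]; };

    static bool page_is_zero(const struct phys_page *pg)    /* S103: "free" means all-zero */
    {
        for (size_t i = 0; i < PAGE_SIZE; i++)
            if (pg->data[i])
                return false;
        return true;
    }

    static void zero_page(struct phys_page *pg)              /* S104 / S105 */
    {
        memset(pg->data, 0, PAGE_SIZE);
    }

    static void prepare_page_for_user(struct phys_page *pg)  /* decision flow after allocation */
    {
        if (pg->type == MEM_RWASYM) {        /* S102: preset identifier? */
            if (!page_is_zero(pg))           /* S103 */
                zero_page(pg);               /* S104: write only when the page is dirty */
        } else {
            zero_page(pg);                   /* S105: ordinary memory is always zeroed */
        }
    }

    int main(void)
    {
        static struct phys_page pg = { .type = MEM_RWASYM };
        pg.data[100] = 0xAB;                 /* a dirty page forces the zeroing path */
        prepare_page_for_user(&pg);
        printf("byte 100 after prepare: %d\n", pg.data[100]);   /* prints 0 */
        return 0;
    }
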
In the embodiment of the invention, a memory type identifier is set for memory that can be allocated to a virtual machine process, where the memory type identifier corresponding to memory whose read bandwidth is greater than its write bandwidth is a preset identifier. In practice, before physical memory is allocated to the user space, a memory type field may be set for each physical memory node, and during kernel driver initialization the memory type identifier is written into that field according to the memory type of the physical memory. Illustratively, for memory nodes that can be allocated to a virtual machine process, the memory type is indicated by adding a feature field to the NUMA (non-uniform memory access) node. For example, a memory_type field is added under the /sys/devices/system/node directory of the memory node. When the kmem kernel driver is initialized, the field is set to option if the physical memory allocated for mapping the user space is memory whose read bandwidth is greater than its write bandwidth, and to dram if its read bandwidth is less than or equal to its write bandwidth. Fig. 2 is a flowchart illustrating physical memory registration according to an alternative embodiment of the invention. As shown in fig. 2, the node registration process for memory whose read bandwidth is greater than its write bandwidth comprises: step S201, initializing the memory; step S202, setting the memory type identifier of the memory node to option; step S203, registering the memory node.
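As a hedged illustration of exposing such a per-node field through sysfs, the minimal kernel-module sketch below publishes a read-only memory_type attribute. The attribute location (/sys/kernel/memtype_demo/ instead of the per-node directory) and the module name are simplifying assumptions of this description, not the actual kmem driver change:

    #include <linux/module.h>
    #include <linux/kobject.h>
    #include <linux/sysfs.h>

    /* "option" marks memory whose read bandwidth exceeds its write bandwidth; "dram" otherwise */
    static const char *memory_type = "option";

    static ssize_t memory_type_show(struct kobject *kobj,
                                    struct kobj_attribute *attr, char *buf)
    {
        return sysfs_emit(buf, "%s\n", memory_type);
    }

    static struct kobj_attribute memory_type_attr = __ATTR_RO(memory_type);
    static struct kobject *demo_kobj;

    static int __init memtype_demo_init(void)
    {
        demo_kobj = kobject_create_and_add("memtype_demo", kernel_kobj);
        if (!demo_kobj)
            return -ENOMEM;
        return sysfs_create_file(demo_kobj, &memory_type_attr.attr);
    }

    static void __exit memtype_demo_exit(void)
    {
        kobject_put(demo_kobj);
    }

    module_init(memtype_demo_init);
    module_exit(memtype_demo_exit);
    MODULE_LICENSE("GPL");

After inserting such a module, reading /sys/kernel/memtype_demo/memory_type prints option.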
In the embodiment of the invention, by setting a memory type identifier, it can be determined whether the allocated physical memory is memory whose read bandwidth is greater than its write bandwidth; when the allocated physical memory carries the preset identifier, it is further determined whether the memory is free memory, a zeroing operation is performed only when it is not free memory, and the zeroing operation is skipped when it is free memory, which greatly increases the creation and startup speed of virtual machines backed by memory whose read bandwidth is greater than its write bandwidth.
If the memory type identifier of the physical memory allocated to the user space is not the preset identifier, whether to perform the zeroing operation may still be decided in the manner of steps S103 to S105, or the zeroing operation may simply be performed on the physical memory without any check, so that it becomes free memory. For physical memory without the preset identifier, the write bandwidth is greater than or equal to the read bandwidth and is relatively high, so zeroing is fast; skipping the logical check of step S103 and zeroing directly therefore improves the efficiency of memory allocation management.
How the physical memory allocated to each user space is chosen can be decided according to the actual situation; optionally, a buddy system is used to allocate physical memory for the user space. The buddy system is the Linux kernel's memory allocation subsystem and manages memory in units of 4K pages. Considering that some data structures used at run time are very small, embodiments of the present invention may also use the slab allocator (a memory allocation mechanism) to allocate physical memory for the user space, thereby reducing the allocation granularity.
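For illustration only, the kernel-module sketch below contrasts a page-granular buddy allocation with a small slab allocation; it is a minimal example, not part of the claimed method:

    #include <linux/module.h>
    #include <linux/gfp.h>
    #include <linux/slab.h>

    static int __init alloc_demo_init(void)
    {
        /* buddy system: an order-2 request returns 2^2 = 4 contiguous 4K pages */
        struct page *pages = alloc_pages(GFP_KERNEL, 2);
        /* slab: a 64-byte object, far smaller than a page, to reduce allocation granularity */
        void *obj = kmalloc(64, GFP_KERNEL);

        kfree(obj);                 /* kfree(NULL) is a no-op, so no separate check is needed */
        if (pages)
            __free_pages(pages, 2);
        return 0;
    }

    static void __exit alloc_demo_exit(void)
    {
    }

    module_init(alloc_demo_init);
    module_exit(alloc_demo_exit);
    MODULE_LICENSE("GPL");
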
The virtual machine process may process data packets using the Data Plane Development Kit (DPDK), which can greatly improve the packet forwarding performance of the system.
The following describes an exemplary memory allocation management method according to an embodiment of the present invention with reference to fig. 3 and fig. 4. Memory whose read bandwidth is greater than its write bandwidth can be built with 3D XPoint technology, which greatly improves storage speed and density while improving endurance and reducing cost. At present, the Linux kernel can use the kmem driver to expose such memory configured in AD mode (App Direct mode, in which system software directly controls and manages the memory) as a separate NUMA node.
QEMU (Quick Emulator, virtual machine emulation software that, together with KVM, can emulate a complete virtual machine and is widely used in cloud computing scenarios) can allocate virtual machine memory from huge pages. When the huge pages of a QEMU virtual machine are bound to the NUMA node where memory with a read bandwidth greater than its write bandwidth resides, QEMU can present that memory to the guest operating system as ordinary virtual memory. FIG. 3 is a schematic diagram of the creation and launching of a QEMU virtual machine in the prior art. When the QEMU virtual machine uses a DPDK scheme, DPDK and the virtual machine share memory, so QEMU must allocate all of the memory in advance; the corresponding operation is to touch the pages. During this touch-pages stage the kernel performs page table mapping and memory allocation and finally zeroes the memory. Because such memory has asymmetric read and write bandwidth, its read bandwidth can be many times its write bandwidth. For a virtual machine with large memory, for example 1 TB, the kernel spends a large amount of time zeroing the memory, and because of the write-bandwidth limitation, increasing the concurrency of the touch-pages operation cannot speed up the zeroing. In this case the creation and startup time of the virtual machine is many times longer than with ordinary memory.
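For intuition, the user-space sketch below imitates the touch-pages step: writing one byte per page forces the kernel to fault in, map, and (conventionally) zero every page up front. It is an illustrative stand-in under simplified assumptions, not QEMU's actual pre-allocation code:

    #include <sys/mman.h>
    #include <stddef.h>

    /* touch one byte per page so every page is allocated and mapped before the guest runs */
    static void touch_pages(unsigned char *area, size_t total, size_t page_size)
    {
        for (size_t off = 0; off < total; off += page_size)
            area[off] = 0;          /* write fault: allocate + map + (conventionally) zero */
    }

    int main(void)
    {
        size_t total = 64UL << 20;  /* 64 MB here; a large guest may need hundreds of GB */
        unsigned char *area = mmap(NULL, total, PROT_READ | PROT_WRITE,
                                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (area == MAP_FAILED)
            return 1;
        touch_pages(area, total, 4096);
        munmap(area, total);
        return 0;
    }
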
Targeting the characteristics of memory whose read bandwidth is greater than its write bandwidth and the way the existing kernel allocates memory, the embodiment of the invention improves the kernel's procedure for zeroing such memory and thereby speeds up the startup of cloud hosts that use large amounts of this memory. FIG. 4 is a schematic diagram of the creation and launching of a QEMU virtual machine in some embodiments of the invention. Referring to fig. 4, the technical solution of this embodiment comprises two parts:
first, whether the memory is a memory with a read bandwidth larger than a write bandwidth is indicated by increasing the feature of the numa node. Specifically, a memory _ type field is added under a/system/devices/system/node directory, when a kmem drive is initialized, a memory field with a reading bandwidth larger than a writing bandwidth is set to be option, and a common memory node field is dram;
second, when the kernel buddy system allocates a page to the user space, it checks, before zeroing, whether the memory_type field of the NUMA node equals option. If it does, the page is first read and compared with 0. If every byte is 0, the page is returned to the user directly without a zeroing operation; if not, the zeroing operation is performed. If the page comes from ordinary memory, it is zeroed directly, as in the existing flow.
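A hedged, self-contained kernel-module sketch of this read-first, zero-only-if-dirty clear path follows. The per-node lookup node_is_read_fast_write_slow() is a hypothetical stand-in for reading the node's memory_type field; the remaining calls are standard kernel helpers (memchr_inv() returns NULL when every byte in the range equals the given value):

    #include <linux/module.h>
    #include <linux/slab.h>
    #include <linux/string.h>
    #include <linux/mm.h>

    /* hypothetical stand-in: a real patch would consult the node's memory_type field */
    static bool node_is_read_fast_write_slow(int nid)
    {
        return true;
    }

    static void clear_page_checked(void *kaddr, int nid)
    {
        /* cheap read pass: memchr_inv() == NULL means the whole page already reads as zero */
        if (node_is_read_fast_write_slow(nid) && !memchr_inv(kaddr, 0, PAGE_SIZE))
            return;                         /* already zero: skip the slow write */
        memset(kaddr, 0, PAGE_SIZE);        /* otherwise zero as in the existing flow */
    }

    static int __init clearcheck_init(void)
    {
        void *buf = kmalloc(PAGE_SIZE, GFP_KERNEL | __GFP_ZERO);

        if (!buf)
            return -ENOMEM;
        clear_page_checked(buf, 0);         /* buffer is already zero, so the write is skipped */
        kfree(buf);
        return 0;
    }

    static void __exit clearcheck_exit(void)
    {
    }

    module_init(clearcheck_init);
    module_exit(clearcheck_exit);
    MODULE_LICENSE("GPL");
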
In this embodiment, a feature is added to the NUMA node so that it can be determined whether the physical memory allocated to the user space has a read bandwidth greater than its write bandwidth. If it does, the page is read and checked first, and if its contents are all 0, no zeroing operation is performed. In this way, when the page contents are already zero, the creation and startup speed of a cloud host backed by such memory can be greatly increased.
According to a second aspect of the embodiments of the present invention, there is provided an apparatus for implementing the above method.
Fig. 5 is a schematic diagram illustrating major modules of an apparatus for memory allocation management according to an embodiment of the present invention. As shown in fig. 5, the apparatus 500 for memory allocation management includes:
the memory allocation module 501, configured to allocate physical memory for a user space in response to a page fault exception generated when a virtual machine process accesses the user space, and to acquire a memory type identifier of the physical memory;
the memory zeroing module 502, configured to determine, when the memory type identifier is a preset identifier, whether the physical memory is free memory, and to perform a zeroing operation on the physical memory when it is not free memory, so that the physical memory becomes free memory; the read bandwidth of physical memory carrying the preset identifier is greater than its write bandwidth.
Optionally, the apparatus further comprises an initialization module configured to: before physical memory is allocated to the user space, set a memory type field for each physical memory node, and, during kernel driver initialization, write the memory type identifier into the memory type field according to the memory type of the physical memory.
Optionally, the memory zeroing module is further configured to: perform a zeroing operation on the physical memory when the memory type identifier is not the preset identifier, so that the physical memory becomes free memory.
Optionally, the memory allocation module allocates physical memory for the user space by using a buddy system.
Optionally, the virtual machine process processes data packets using the Data Plane Development Kit (DPDK).
According to a third aspect of the embodiments of the present invention, there is provided an electronic device for memory allocation management, including: one or more processors; the storage device is configured to store one or more programs, and when the one or more programs are executed by the one or more processors, the one or more processors are enabled to implement the method provided by the first aspect of the embodiments of the present invention.
According to a fourth aspect of embodiments of the present invention, there is provided a computer readable medium, on which a computer program is stored, which when executed by a processor, implements the method provided by the first aspect of embodiments of the present invention.
Fig. 6 illustrates an exemplary system architecture 600 of a memory allocation management method or apparatus to which embodiments of the invention may be applied.
As shown in fig. 6, the system architecture 600 may include terminal devices 601, 602, 603, a network 604, and a server 605. The network 604 serves as a medium for providing communication links between the terminal devices 601, 602, 603 and the server 605. Network 604 may include various types of connections, such as wire, wireless communication links, or fiber optic cables, to name a few.
A user may use the terminal devices 601, 602, 603 to interact with the server 605 via the network 604 to receive or send messages or the like. The terminal devices 601, 602, 603 may have installed thereon various communication client applications, such as shopping applications, web browser applications, search applications, instant messaging tools, mailbox clients, social platform software, etc. (by way of example only).
The terminal devices 601, 602, 603 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 605 may be a server providing various services, such as a background management server (for example only) providing support for shopping websites browsed by users using the terminal devices 601, 602, 603. The backend management server may analyze and perform other processing on the received data such as the product information query request, and feed back a processing result (for example, target push information, product information — just an example) to the terminal device.
It should be noted that the memory allocation management method provided in the embodiment of the present invention is generally executed by the server 605, and accordingly, the memory allocation management apparatus is generally disposed in the server 605.
It should be understood that the number of terminal devices, networks, and servers in fig. 6 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to FIG. 7, shown is a block diagram of a computer system 700 suitable for use with a terminal device implementing an embodiment of the present invention. The terminal device shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 7, the computer system 700 includes a Central Processing Unit (CPU) 701, which can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data necessary for the operation of the system 700 are also stored. The CPU 701, the ROM 702, and the RAM 703 are connected to each other via a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
The following components are connected to the I/O interface 705: an input portion 706 including a keyboard, a mouse, and the like; an output section 707 including components such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and a speaker; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card, a modem, or the like. The communication section 709 performs communication processing via a network such as the internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 710 as necessary, so that the computer program read out therefrom is mounted in the storage section 708 as necessary.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 709, and/or installed from the removable medium 711. The computer program performs the above-described functions defined in the system of the present invention when executed by the Central Processing Unit (CPU) 701.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present invention may be implemented by software or hardware. The described modules may also be provided in a processor, which may be described as: a processor comprising a memory allocation module and a memory zeroing module. In some cases the names of these modules do not limit the modules themselves; for example, the memory zeroing module may also be described as a "module that performs a zeroing operation on the physical memory".
As another aspect, the present invention also provides a computer-readable medium, which may be included in the device described in the above embodiments or may exist separately without being assembled into the device. The computer-readable medium carries one or more programs which, when executed by the device, cause the device to: in response to a page fault exception generated when a virtual machine process accesses a user space, allocate physical memory for the user space and acquire a memory type identifier of the physical memory; when the memory type identifier is a preset identifier, determine whether the physical memory is free memory, and perform a zeroing operation on the physical memory when it is not free memory, so that the physical memory becomes free memory; the read bandwidth of physical memory carrying the preset identifier is greater than its write bandwidth.
According to the technical solution of the embodiments of the present invention, by setting a memory type identifier, it can be determined whether the allocated physical memory is memory whose read bandwidth is greater than its write bandwidth; when the allocated physical memory carries the preset identifier, it is further determined whether the memory is free memory, a zeroing operation is performed only when it is not free memory, and the zeroing operation is skipped when it is free memory, which greatly increases the creation and startup speed of virtual machines backed by memory whose read bandwidth is greater than its write bandwidth.
The above-described embodiments should not be construed as limiting the scope of the invention. Those skilled in the art will appreciate that various modifications, combinations, sub-combinations, and substitutions can occur, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (9)

1. A method for memory allocation management, comprising:
in response to a page fault exception generated when a virtual machine process accesses a user space, allocating physical memory for the user space and acquiring a memory type identifier of the physical memory;
when the memory type identifier is a preset identifier, determining whether the physical memory is free memory, and performing a zeroing operation on the physical memory when it is not free memory, so that the physical memory becomes free memory; wherein the read bandwidth of physical memory carrying the preset identifier is greater than its write bandwidth.
2. The method of claim 1, further comprising: before allocating physical memory to the user space, setting a memory type field for each physical memory node, and, during kernel driver initialization, writing the memory type identifier into the memory type field according to the memory type of the physical memory.
3. The method of claim 1, further comprising: performing a zeroing operation on the physical memory when the memory type identifier is not the preset identifier, so that the physical memory becomes free memory.
4. The method of claim 1, wherein a buddy system is employed to allocate physical memory for the user space.
5. The method of claim 1, wherein the virtual machine process processes data packets using a Data Plane Development Kit (DPDK).
6. An apparatus for memory allocation management, comprising:
a memory allocation module, configured to allocate physical memory for a user space in response to a page fault exception generated when a virtual machine process accesses the user space, and to acquire a memory type identifier of the physical memory;
a memory zeroing module, configured to determine, when the memory type identifier is a preset identifier, whether the physical memory is free memory, and to perform a zeroing operation on the physical memory when it is not free memory, so that the physical memory becomes free memory; wherein the read bandwidth of physical memory carrying the preset identifier is greater than its write bandwidth.
7. The apparatus of claim 6, further comprising an initialization module configured to: before physical memory is allocated to the user space, set a memory type field for each physical memory node, and, during kernel driver initialization, write the memory type identifier into the memory type field according to the memory type of the physical memory.
8. An electronic device for memory allocation management, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-5.
9. A computer-readable medium, on which a computer program is stored, which, when executed by a processor, implements the method according to any one of claims 1-5.
CN202211332358.5A 2022-10-28 2022-10-28 Memory allocation management method and device Pending CN115562871A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211332358.5A CN115562871A (en) 2022-10-28 2022-10-28 Memory allocation management method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211332358.5A CN115562871A (en) 2022-10-28 2022-10-28 Memory allocation management method and device

Publications (1)

Publication Number Publication Date
CN115562871A true CN115562871A (en) 2023-01-03

Family

ID=84768192

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211332358.5A Pending CN115562871A (en) 2022-10-28 2022-10-28 Memory allocation management method and device

Country Status (1)

Country Link
CN (1) CN115562871A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116185910A (en) * 2023-04-25 2023-05-30 北京壁仞科技开发有限公司 Method, device and medium for accessing device memory and managing device memory
CN116185910B (en) * 2023-04-25 2023-07-11 北京壁仞科技开发有限公司 Method, device and medium for accessing device memory and managing device memory

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination