CN115794417A - Memory management method and device - Google Patents

Memory management method and device

Info

Publication number
CN115794417A
CN115794417A
Authority
CN
China
Prior art keywords
memory
target
capacity
mapping
space
Prior art date
Legal status
Pending
Application number
CN202310052653.3A
Other languages
Chinese (zh)
Inventor
刘博格
郝宇
Current Assignee
Primitive Data Beijing Information Technology Co ltd
Original Assignee
Primitive Data Beijing Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Primitive Data Beijing Information Technology Co ltd
Priority to CN202310052653.3A
Publication of CN115794417A
Pending

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The application discloses a memory management method and device. The method includes: acquiring a target capacity of memory to be allocated; determining a target virtual memory space with consecutive addresses from a preset virtual memory space according to the target capacity; mapping the target-capacity memory from the virtual memory space onto free pages of a preset main memory file according to the target start address of the target virtual memory space; and allocating physical memory of the target capacity according to the mapping relationship between pages in the preset main memory file and the physical memory. The method at least solves the technical problem of low memory-space utilization caused by insufficient use of memory fragments during memory management.

Description

Memory management method and device
Technical Field
The present application relates to the field of computer technologies, and in particular, to a memory management method and apparatus.
Background
Memory management refers to the techniques for allocating and using a computer's memory resources while software is running. Its main objective is to allocate memory efficiently and quickly and to release and reclaim it at the appropriate time. Modern operating systems all employ virtual memory. Virtual memory lets an application believe it owns a contiguous, complete address space; the translation from virtual addresses to physical addresses is handled by the memory management unit (MMU) of the CPU, which dynamically translates virtual addresses via page tables stored in memory. When an application requests s bytes of memory, an unused contiguous virtual address range of size s is allocated to it from the heap. As the application runs, it repeatedly requests and releases memory, which leaves many small gaps between the allocated regions in the heap. These gaps are too small to satisfy new requests, so memory is wasted. Existing memory-management solutions commonly eliminate the gaps by using fixed-size allocation units, called granules: every new allocation must be an integer multiple of the granule size, and the unused space inside granules is tracked with a free list. Although granules solve the gap problem, the free space inside a granule is still often too small to use and is therefore wasted. Meanwhile, although the operating system provides system calls for requesting and releasing memory, the overhead of these calls is significant, particularly when physical memory is nearly exhausted and frequent disk swapping is needed; user-mode management that reuses memory and improves performance is therefore common application-development practice. MySQL, for example, uses tree-structured memory management to isolate function calls and memory use among different threads.
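As a rough illustration of the granule scheme just described, the following C sketch rounds each request up to a whole number of granules and records the unused tail in a free list. GRANULE, free_node, and granule_alloc are names introduced here for illustration, not terms from the patent, and malloc stands in for a real page-backed allocation.

```c
/* Rough sketch of granule-based allocation: round each request up to a
 * whole number of granules and record the unused tail in a free list. */
#include <stdlib.h>
#include <stddef.h>

#define GRANULE (64 * 1024)      /* assumed fixed allocation unit size */

typedef struct free_node {
    void *addr;                  /* start of unused space inside a granule */
    size_t size;                 /* bytes left unused */
    struct free_node *next;
} free_node;

static free_node *free_list;     /* records unused space inside granules */

static void *granule_alloc(size_t s) {
    size_t rounded = ((s + GRANULE - 1) / GRANULE) * GRANULE;
    void *p = malloc(rounded);   /* stand-in for a page-backed allocation */
    if (p != NULL && rounded > s) {
        free_node *n = malloc(sizeof *n);
        if (n != NULL) {
            n->addr = (char *)p + s;   /* unused tail in the last granule */
            n->size = rounded - s;     /* often too small to reuse */
            n->next = free_list;
            free_list = n;
        }
    }
    return p;
}
```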
Disclosure of Invention
The embodiments of the application provide a memory management method and device, which are used to at least solve the technical problem of low memory-space utilization caused by insufficient use of memory fragments during memory management.
According to an aspect of an embodiment of the present application, a memory management method is provided, including: acquiring a target capacity of memory to be allocated; determining a target virtual memory space with consecutive addresses from a preset virtual memory space according to the target capacity; mapping the target-capacity memory from the virtual memory space onto free pages of a preset main memory file according to the target start address of the target virtual memory space; and allocating physical memory of the target capacity according to the mapping relationship between pages in the preset main memory file and the physical memory.
Optionally, mapping the target-capacity memory from the virtual memory space onto free pages of the preset main memory file according to the target start address of the target virtual memory space includes: acquiring the target number of free pages in the preset main memory file; when the target number is greater than a preset number threshold, mapping the target-capacity memory from the virtual memory space onto free pages of the preset main memory file according to a first policy; and when the target number is smaller than the preset number threshold, mapping the target-capacity memory from the virtual memory space onto free pages of the preset main memory file according to a second policy.
Optionally, mapping the target-capacity memory from the virtual memory space onto free pages of the preset main memory file according to the first policy includes: acquiring the number of consecutive free pages in the preset main memory file; determining the storage space of the consecutive free pages from their number and the capacity of a free page; when the target capacity is larger than the storage space of the consecutive free pages, mapping a first-capacity portion of the target virtual memory space onto the consecutive free pages, the first capacity being equal to the storage space of the consecutive free pages; and updating the target capacity and the target number, and determining the mapping policy from the updated target capacity and target number.
Optionally, when the target capacity is smaller than the storage space of the consecutive free pages, the method further includes: determining the ratio of the target capacity to the capacity of a free page; rounding the ratio up to obtain a first number; and allocating the first number of free pages to map the target capacity.
Optionally, mapping the target-capacity memory from the virtual memory space onto free pages of the preset main memory file according to the second policy includes: applying to the preset main memory file for a memory allocation unit, the memory allocation unit comprising several consecutive free pages; determining a second number of consecutive free pages in the memory allocation unit; determining the number of memory allocation units to apply for from the target capacity and the second number; and mapping the target-capacity memory from the virtual memory space onto the consecutive free pages in the applied memory allocation units.
Optionally, after mapping the target-capacity memory from the virtual memory space onto the consecutive free pages in the applied memory allocation units, the method further includes: acquiring target information of the free pages in the memory allocation unit, the target information including the position of each free page and its offset within the main memory file; and recording the target information in a free page record table.
Optionally, mapping the first-capacity portion of the virtual memory space onto the consecutive free pages includes: taking the start address of the first-capacity virtual memory space as the start address of the target virtual memory space; taking its end address as the sum of the start address and the address length corresponding to the first capacity; and mapping the first-capacity virtual memory space onto the consecutive free pages according to the start address and the end address.
According to another aspect of the embodiments of the present application, a memory management device is further provided, including: an acquisition module, configured to acquire a target capacity of memory to be allocated; a determining module, configured to determine a target virtual memory space with consecutive addresses from a preset virtual memory space according to the target capacity; a first mapping module, configured to map the target-capacity memory from the virtual memory space onto free pages of a preset main memory file according to the target start address of the target virtual memory space; and a second mapping module, configured to allocate physical memory of the target capacity according to the mapping relationship between pages in the preset main memory file and the physical memory.
According to another aspect of the embodiments of the present application, a nonvolatile storage medium is further provided, where a program is stored in the nonvolatile storage medium, and when the program runs, a device where the nonvolatile storage medium is located is controlled to execute the memory management method.
According to still another aspect of the embodiments of the present application, a computer device is further provided, including a memory and a processor, where the processor is configured to run a program stored in the memory, and the program, when running, executes the memory management method.
In the embodiments of the application, a target capacity of memory to be allocated is acquired; a contiguous target virtual memory space is determined from a preset virtual memory space according to the target capacity; the target-capacity memory is mapped from the virtual memory space onto free pages of a preset main memory file according to the target start address of the target virtual memory space; and physical memory of the target capacity is allocated according to the mapping relationship between pages in the preset main memory file and the physical memory. By pre-allocating a contiguous target virtual memory space in the preset virtual memory space, mapping that space onto free pages of the main memory file, and then allocating free physical pages of the corresponding capacity through the mapping between the main memory file and physical memory, fragmented space in memory is fully used. This improves memory-space utilization and solves the technical problem of low memory-space utilization caused by insufficient use of memory fragments during memory management.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a block diagram of a hardware structure of a computer terminal (or a mobile device) for a memory management method according to an embodiment of the present application;
FIG. 2 is a flow chart illustrating a memory management method according to the present application;
fig. 3 is a schematic diagram of an optional mapping relationship between a main memory file and physical memory according to an embodiment of the present application;
FIG. 4 is a flow diagram illustrating an alternative mapping of virtual memory to physical memory via a main memory file according to an embodiment of the present application;
fig. 5 is a schematic diagram illustrating an alternative memory allocation process according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an alternative memory management device according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowcharts, in some cases, the steps illustrated or described may be performed in an order different than presented herein.
The method provided by the embodiments of the application can be executed on a mobile terminal, a computer terminal, or a similar computing device. Fig. 1 shows a block diagram of the hardware structure of a computer terminal (or mobile device) for implementing the memory management method. As shown in fig. 1, the computer terminal 10 (or mobile device 10) may include one or more processors 102 (shown as 102a, 102b, ..., 102n; the processors 102 may include, but are not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)), a memory 104 for storing data, and a transmission module 106 for communication functions. The computer terminal may further include: a display, an input/output interface (I/O interface), a Universal Serial Bus (USB) port (which may be included as one of the I/O interface ports), a network interface, a power source, and/or a camera. It will be understood by those skilled in the art that the structure shown in fig. 1 is only illustrative and does not limit the structure of the electronic device. For example, the computer terminal 10 may include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
It should be noted that the one or more processors 102 and/or other data processing circuitry described above may be referred to generally herein as "data processing circuitry". The data processing circuitry may be embodied in whole or in part in software, hardware, firmware, or any combination thereof. Further, the data processing circuit may be a single stand-alone processing module, or incorporated in whole or in part into any of the other elements in the computer terminal 10 (or mobile device). As referred to in the embodiments of the application, the data processing circuit acts as a processor control (e.g. selection of a variable resistance termination path connected to the interface).
The memory 104 may be used to store software programs and modules of application software, such as program instructions/data storage devices corresponding to the memory management method in the embodiment of the present application, and the processor 102 executes various functional applications and data processing by running the software programs and modules stored in the memory 104, that is, implementing the memory management method described above. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the computer terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the computer terminal 10. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station so as to communicate with the internet. In one example, the transmission device 106 can be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the computer terminal 10 (or mobile device).
In the foregoing operating environment, an embodiment of the present application provides a memory management method, as shown in fig. 2, the method includes the following steps:
step S202, obtaining a target capacity memory to be allocated;
step S204, determining a section of target virtual memory space with continuous addresses from preset virtual memory space according to the target capacity;
step S206, mapping the target-capacity memory from the virtual memory space onto a free page of a preset main memory file according to the target start address of the target virtual memory space;
step S208, allocating a physical memory with a target capacity from the physical memory according to the mapping relationship between the page in the preset main memory file and the physical memory.
Through the above steps, a contiguous target virtual memory space is pre-allocated in the preset virtual memory space, the target virtual memory space is mapped onto free pages of the main memory file, and free pages in physical memory are then allocated, according to the capacity of the target virtual memory space, through the mapping relationship between the main memory file and physical memory. Fragmented space in memory is thereby fully used, memory-space utilization improves, and the technical problem of low memory-space utilization caused by insufficient use of memory fragments during memory management is solved.
It should be noted that virtual memory remapping means changing an existing mapping from virtual address space to physical address space. In modern computing, the addresses used by a program are virtual memory addresses, while the addresses that actually exist in hardware are physical memory addresses. The operating system allocates an independent set of virtual addresses to each process, so processes do not interfere with one another. A virtual address held by a process is translated into a physical address through the mapping maintained by the memory management unit (MMU) in the CPU, and memory is then accessed through that physical address. If a user can perform memory remapping, that is, directly control the mapping from virtual pages to physical pages, many low-level data operations become very efficient. For example, a common and time-consuming operation on an array is resizing it. Normally, growing an array involves time-consuming physical copies and page allocations, or manual management of the array with lists (which requires an additional lookup). A container in the C++ STL, for instance, applies for twice the original space and then copies the old data into the new space. With memory remapping, only the first half of a virtual memory region of twice the original size needs to be mapped to the old physical memory, and the second half to new physical memory, which eliminates the time-consuming data copy.
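The array-growth example can be sketched in C as below. It assumes a Linux-like system and a file descriptor fd referring to a main memory file (for example one created with memfd_create or on tmpfs); grow_array is an illustrative name, not something defined by the patent.

```c
/* A minimal sketch of copy-free array growth via remapping. */
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>
#include <stddef.h>

void *grow_array(int fd, void *old_base, size_t old_size) {
    size_t new_size = old_size * 2;
    if (ftruncate(fd, (off_t)new_size) != 0)   /* extend the backing file */
        return NULL;
    /* Map a fresh contiguous virtual region twice as large over the file. */
    void *new_base = mmap(NULL, new_size, PROT_READ | PROT_WRITE,
                          MAP_SHARED, fd, 0);
    if (new_base == MAP_FAILED)
        return NULL;
    /* The first half of new_base already shows the old data, because both
     * old_base and new_base map the same file pages: no copy takes place. */
    munmap(old_base, old_size);                /* drop the old mapping */
    return new_base;
}
```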
It should be further noted that the main memory file resides in main memory; commonly used examples are tmpfs (a file system) and hugetlbfs (a file system), which correspond to the small-page and huge-page objects in shared memory respectively. Fig. 3 shows the mapping relationship between a main memory file and physical memory. As shown in fig. 3, each page of the main memory file corresponds to a physical page, and the physical pages are not necessarily consecutive; for example, the first page of the main memory file corresponds to page 19 in physical memory, and the second page corresponds to page 27. The main memory file system, however, records the physical page address corresponding to each file page (for example, the address recorded for the first file page is that of physical page 19). Thus, with the help of the main memory file system, data in the main memory file can be accessed through contiguous address offsets.
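To make the role of the main memory file concrete, the following C sketch creates a memory-backed file and maps it. It assumes Linux's memfd_create, which yields a tmpfs-backed (small-page) file; a hugetlbfs file would instead be opened under its mount point. The code is illustrative only, since the patent does not prescribe a particular API.

```c
/* Illustrative only: create a main memory file and map s bytes of it. */
#define _GNU_SOURCE
#include <sys/mman.h>
#include <unistd.h>
#include <stdio.h>

int main(void) {
    size_t s = 16 * 4096;                        /* example size: 16 pages */
    int fd = memfd_create("main_memory_file", 0);
    if (fd < 0 || ftruncate(fd, s) != 0) {
        perror("create main memory file");
        return 1;
    }
    /* Offsets 0..s-1 of the file are contiguous even though the physical
     * pages behind them need not be. */
    char *base = mmap(NULL, s, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    base[0] = 'x';                               /* touch the first page */
    munmap(base, s);
    close(fd);
    return 0;
}
```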
Next, in order to create a self-defined virtual memory mapping, virtual memory addresses need to be mapped into the main memory file. A target virtual memory space with a capacity equal to the target capacity s is created through mmap (a memory mapping system call) and mapped onto the preset main memory file f; the first parameter b is the start address of the target virtual memory space and starts at a page boundary. In later use, remapping of virtual memory addresses is achieved by passing the start address of an existing virtual memory region, so that the existing mapping is modified.
Fig. 4 shows a flowchart of mapping virtual memory to physical memory through the main memory file. As shown in fig. 4, the page at virtual memory address b + i × p is mapped, through mmap, to the page at offset i × p in the main memory file.
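The per-page mapping of fig. 4 can be sketched as follows. MAP_FIXED and the name remap_pages are assumptions of this sketch; the patent only states that the existing mapping is modified by passing the existing virtual start address.

```c
/* Sketch of fig. 4: map the page at virtual address b + i*p onto the page
 * at offset i*p of the main memory file fd.  MAP_FIXED overwrites whatever
 * mapping currently exists at that address, which is how an existing
 * virtual region is remapped here. */
#include <sys/mman.h>
#include <sys/types.h>
#include <stddef.h>

static int remap_pages(char *b, size_t n_pages, size_t p, int fd) {
    for (size_t i = 0; i < n_pages; i++) {
        void *addr = mmap(b + i * p, p, PROT_READ | PROT_WRITE,
                          MAP_SHARED | MAP_FIXED, fd, (off_t)(i * p));
        if (addr == MAP_FAILED)
            return -1;           /* pages mapped so far are left in place */
    }
    return 0;
}
```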
In an actual application scenario, the start address of the memory requested by the user (the target-capacity memory to be allocated) is first determined, and a virtual memory space with consecutive addresses (the target virtual memory space) is determined according to the size of the request (the target capacity). The virtual memory pages of the target virtual memory space are then mapped, in order, onto free pages of the preset main memory file according to the free page record table. As shown in fig. 5, there are three memory allocation units (granule 1, granule 2, and granule 3), each with a capacity of 5 pages and each contiguous within the preset main memory file. Pages are released continually while memory is in use; the released pages are recorded in the free page table, and in fig. 5 each memory allocation unit contains two free pages. The first two pages of the target-capacity memory are mapped onto the two free pages in granule 1, whose previous virtual memory mappings are released; the remaining three pages are mapped, in the same way, onto the free pages in granule 2 and granule 3.
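The free page record table mentioned above can be pictured as a list of runs of consecutive free pages within the main memory file. The struct below is an assumed layout used for illustration in the sketches that follow; it is not a structure defined by the patent.

```c
/* Assumed layout of the free page record table: each entry describes a run
 * of consecutive free pages in the main memory file. */
#include <stddef.h>
#include <sys/types.h>

struct free_run {
    off_t  file_offset;          /* offset of the first free page in the file */
    size_t n_pages;              /* number of consecutive free pages in the run */
};

struct free_page_table {
    struct free_run runs[64];    /* fixed capacity, enough for a sketch */
    size_t n_runs;               /* runs currently recorded */
};
```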
Step S202 to step S208 will be described below by way of specific examples.
In step S206, mapping the target-capacity memory from the virtual memory space onto free pages of the preset main memory file according to the target start address of the target virtual memory space includes: acquiring the target number of free pages in the preset main memory file; when the target number is greater than a preset number threshold, mapping the target-capacity memory from the virtual memory space onto free pages of the preset main memory file according to a first policy; and when the target number is smaller than the preset number threshold, mapping the target-capacity memory from the virtual memory space onto free pages of the preset main memory file according to a second policy.
Specifically, given the target capacity s, the start address b of the target virtual memory space is determined; it must first be ensured that the virtual memory addresses from b to b + s - 1 are unused.
It should be noted that the preset number threshold may be 0.
When the target number m of free pages in the preset main memory file is greater than 0, it is determined that free pages exist in the main memory file, virtual memory mapping is performed, and the target-capacity memory is mapped from the virtual memory space onto free pages of the preset main memory file according to the first policy. When the number m of free pages in the preset main memory file is 0, it is determined that no free page exists in the main memory file, and the target-capacity memory is mapped from the virtual memory space onto free pages of the preset main memory file according to the second policy.
Wherein the first policy is: acquiring the number of consecutive free pages in the preset main memory file; determining the storage space of the consecutive free pages from their number and the capacity of a free page; when the target capacity is larger than the storage space of the consecutive free pages, mapping a first-capacity portion of the target virtual memory space onto the consecutive free pages, the first capacity being equal to the storage space of the consecutive free pages; and updating the target capacity and the target number and determining the mapping policy from the updated values.
Specifically, the free page record table is read in order and a run of k consecutive free pages is found at offset off in the main memory file f. It is then judged whether these k free pages can satisfy the target-capacity allocation: assuming each page has size p, if s - k × p > 0 the run is not sufficient. In that case all k free pages are allocated to the user through an mmap system call, that is, the virtual memory addresses from b to b + k × p are mapped onto the positions from off to off + k × p in the main memory file. The number m of remaining free pages, the number s of remaining unallocated bytes, and the start address b of the remaining virtual memory are then updated, and the method returns to the judgment of whether the total number of free pages m is greater than 0 to continue the remapping. Here the first capacity equals the capacity of the k free pages.
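A compact sketch of this first-policy loop is given below. It reuses the free_run/free_page_table layout assumed earlier, takes the page size p and the main memory file descriptor fd as parameters, and relies on MAP_FIXED remapping; the function names and the final compaction step are choices of this sketch, not details fixed by the patent.

```c
/* First-policy sketch: consume runs of consecutive free pages, in order,
 * from the record table until the request is satisfied or the table is
 * empty.  Returns the number of bytes still unmapped (0 on full success). */
#include <sys/mman.h>
#include <sys/types.h>
#include <stddef.h>

struct free_run { off_t file_offset; size_t n_pages; };
struct free_page_table { struct free_run runs[64]; size_t n_runs; };

static int map_run(char *b, size_t bytes, int fd, off_t off) {
    return mmap(b, bytes, PROT_READ | PROT_WRITE,
                MAP_SHARED | MAP_FIXED, fd, off) == MAP_FAILED ? -1 : 0;
}

size_t policy_one(struct free_page_table *t, size_t p, int fd, char *b, size_t s) {
    size_t i = 0;
    while (s > 0 && i < t->n_runs) {
        struct free_run *r = &t->runs[i];
        size_t k = r->n_pages;
        if (s > k * p) {                     /* run too small: use all of it */
            if (map_run(b, k * p, fd, r->file_offset) != 0) break;
            b += k * p;
            s -= k * p;
            r->n_pages = 0;                  /* run fully consumed */
            i++;
        } else {                             /* run suffices: take ceil(s/p) pages */
            size_t need = (s + p - 1) / p;
            if (map_run(b, need * p, fd, r->file_offset) != 0) break;
            r->file_offset += (off_t)(need * p);
            r->n_pages -= need;
            s = 0;
        }
    }
    /* Drop the runs that were fully consumed. */
    size_t kept = 0;
    for (size_t j = 0; j < t->n_runs; j++)
        if (t->runs[j].n_pages > 0)
            t->runs[kept++] = t->runs[j];
    t->n_runs = kept;
    return s;
}
```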
When the target capacity is smaller than the storage space of the consecutive free pages, the ratio of the target capacity to the capacity of a free page is determined; the ratio is rounded up to obtain a first number, and the first number of free pages is allocated to map the target capacity.
Specifically, if s - k × p < 0, the k free pages can already satisfy the user's allocation requirement. An integer number of free pages whose total size is no smaller than s is allocated according to the target capacity, namely the first number ⌈s/p⌉, and the number m of remaining free pages is updated.
In an optional manner, mapping the target-capacity memory from the virtual memory space onto free pages of the preset main memory file according to the second policy specifically includes: applying to the preset main memory file for memory allocation units, each comprising several consecutive free pages; determining the second number of consecutive free pages in a memory allocation unit; determining the number of memory allocation units to apply for from the target capacity and the second number; and mapping the target-capacity memory from the virtual memory space onto the consecutive free pages in the applied memory allocation units.
Specifically, when the number m of free pages is 0, memory allocation units are applied for from the main memory file. Given the target capacity s, and assuming each granule contains g pages (the second number), ⌈s/(g × p)⌉ granules need to be applied for. The virtual memory addresses from b to b + ⌈s/p⌉ × p are then mapped onto the newly applied granules, and the unused ⌈s/(g × p)⌉ × g - ⌈s/p⌉ free pages are added to the free page record table. Target information of the free pages in the memory allocation units is acquired, the target information including the position of each free page and its offset within the main memory file, and the target information is recorded in the free page record table.
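The second policy can be sketched as follows. Extending the main memory file with ftruncate stands in for "applying to the main memory file for granules", and the parameter and function names are assumptions of this sketch rather than details given by the patent; file_size is assumed to be page-aligned.

```c
/* Second-policy sketch: no free pages remain, so whole granules (units of
 * g consecutive pages) are applied for from the main memory file, the
 * request is mapped onto them, and the leftover pages are recorded as free.
 * file_size is the current (page-aligned) size of the main memory file. */
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>
#include <stddef.h>

struct free_run { off_t file_offset; size_t n_pages; };
struct free_page_table { struct free_run runs[64]; size_t n_runs; };

int policy_two(struct free_page_table *t, size_t p, size_t g,
               int fd, off_t file_size, char *b, size_t s) {
    size_t need_pages    = (s + p - 1) / p;            /* ceil(s / p) */
    size_t need_granules = (s + g * p - 1) / (g * p);  /* ceil(s / (g*p)) */

    /* "Apply for granules": extend the main memory file by whole granules. */
    if (ftruncate(fd, file_size + (off_t)(need_granules * g * p)) != 0)
        return -1;

    /* Map the requested pages onto the start of the new granules. */
    if (mmap(b, need_pages * p, PROT_READ | PROT_WRITE,
             MAP_SHARED | MAP_FIXED, fd, file_size) == MAP_FAILED)
        return -1;

    /* Record the unused tail: ceil(s/(g*p))*g - ceil(s/p) pages stay free. */
    size_t leftover = need_granules * g - need_pages;
    if (leftover > 0 && t->n_runs < 64) {
        t->runs[t->n_runs].file_offset = file_size + (off_t)(need_pages * p);
        t->runs[t->n_runs].n_pages = leftover;
        t->n_runs++;
    }
    return 0;
}
```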
Mapping the first-capacity portion of the virtual memory space onto free pages in the main memory file can be carried out as follows: the start address of the first-capacity virtual memory space is taken as the start address of the target virtual memory space; its end address is the sum of the start address and the address length corresponding to the first capacity; and the first-capacity virtual memory space is mapped onto the consecutive free pages according to the start address and the end address.
After memory allocation is completed, the method further includes dynamically creating and destroying the main memory file according to the running state of the application program, with the free page management module responsible for allocating and reclaiming granules.
An embodiment of the present application provides a memory management device, as shown in fig. 6, including: an obtaining module 60, configured to obtain a target capacity of memory to be allocated; a determining module 62, configured to determine a target virtual memory space with consecutive addresses from a preset virtual memory space according to the target capacity; a first mapping module 64, configured to map the target-capacity memory from the virtual memory space onto free pages of a preset main memory file according to the target start address of the target virtual memory space; and a second mapping module 66, configured to allocate physical memory of the target capacity according to the mapping relationship between pages in the preset main memory file and the physical memory.
The first mapping module 64 includes a policy submodule, configured to acquire the target number of free pages in the preset main memory file; map the target-capacity memory from the virtual memory space onto free pages of the preset main memory file according to a first policy when the target number is greater than a preset number threshold; and map the target-capacity memory from the virtual memory space onto free pages of the preset main memory file according to a second policy when the target number is smaller than the preset number threshold.
The policy submodule includes a first policy unit and a second policy unit.
The first policy unit is configured to acquire the number of consecutive free pages in the preset main memory file according to the first policy; determine the storage space of the consecutive free pages from their number and the capacity of a free page; map a first-capacity portion of the target virtual memory space onto the consecutive free pages when the target capacity is larger than the storage space of the consecutive free pages, the first capacity being equal to that storage space; and update the target capacity and the target number and determine the mapping policy from the updated values.
The first policy unit includes a first subunit, configured to determine the ratio of the target capacity to the capacity of a free page, round the ratio up to obtain a first number, and allocate the first number of free pages to map the target capacity.
The second policy unit is configured to apply to the preset main memory file for memory allocation units, each comprising several consecutive free pages; determine a second number of consecutive free pages in a memory allocation unit; determine the number of memory allocation units to apply for from the target capacity and the second number; and map the target-capacity memory from the virtual memory space onto the consecutive free pages in the applied memory allocation units.
The second policy unit includes a recording subunit, configured to acquire target information of the free pages in the memory allocation unit, the target information including the position of each free page and its offset within the main memory file, and to record the target information in a free page record table.
The first mapping module 64 further includes a mapping submodule, configured to take the start address of the first-capacity virtual memory space as the start address of the target virtual memory space, take its end address as the sum of the start address and the address length corresponding to the first capacity, and map the first-capacity virtual memory space onto the consecutive free pages according to the start address and the end address.
According to another aspect of the embodiments of the present application, a non-volatile storage medium is further provided, in which a program is stored in the non-volatile storage medium, and when the program runs, a device in which the non-volatile storage medium is located is controlled to execute the memory management method.
According to still another aspect of the embodiments of the present application, there is also provided a computer device, including: the memory and the processor are used for operating the program stored in the memory, wherein the program executes the memory management method.
It should be noted that each module in the memory management device may be a program module (for example, a set of program instructions implementing a specific function) or a hardware module; in the latter case, the modules may, for example, all be embodied in a single processor, or their functions may be realized by one processor, without being limited to these forms.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present application, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, a division of a unit may be a division of a logic function, and an actual implementation may have another division, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or may not be executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present application and it should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present application, and these improvements and modifications should also be considered as the protection scope of the present application.

Claims (10)

1. A memory management method, comprising:
acquiring a target capacity memory to be allocated;
determining a section of target virtual memory space with continuous addresses from a preset virtual memory space according to the target capacity;
mapping the target capacity memory from the virtual memory space to a free page of a preset main memory file according to a target initial address of the target virtual memory space;
and allocating the physical memory with the target capacity from the physical memory according to the mapping relation between the pages in the preset main memory file and the physical memory.
2. The method of claim 1, wherein mapping the target capacity memory from the virtual memory space to a free page of a preset main memory file according to a target start address of the target virtual memory space comprises:
acquiring the target number of free pages in the preset main memory file;
under the condition that the target quantity is larger than a preset quantity threshold value, mapping the target capacity memory from the virtual memory space to a free page of a preset main memory file according to a first strategy;
and under the condition that the target quantity is smaller than the preset quantity threshold value, mapping the target capacity memory from the virtual memory space to a free page of a preset main memory file according to a second strategy.
3. The method of claim 2, wherein mapping the target capacity memory from the virtual memory space to a free page of a preset main memory file according to a first policy comprises:
acquiring the number of continuous free pages in the preset main memory file according to the first policy; determining the storage space of the continuous free pages according to the number of the continuous free pages and the capacity of the free pages;
under the condition that the target capacity is larger than the storage space of the continuous free pages, mapping a virtual memory space with a first capacity in the target virtual memory space into the continuous free pages, wherein the first capacity is the same as the storage space of the continuous free pages;
and updating the target capacity and the target quantity, and determining a mapping strategy according to the updated target capacity and the updated target quantity.
4. The method of claim 3, wherein in the case that the target capacity is less than the storage space of the consecutive free pages, the method further comprises:
determining a ratio of the target capacity to the capacity of the free page;
and rounding the ratio to obtain a first number, and allocating the first number of free pages to map the target capacity.
5. The method of claim 2, wherein mapping the target capacity memory from the virtual memory space to a free page of a preset main memory file according to a second policy comprises:
applying for a memory allocation unit to the preset main memory file, wherein the memory allocation unit comprises a plurality of continuous free pages;
determining a second number of continuous free pages in the memory allocation unit;
determining the number of the memory allocation units to be applied according to the target capacity and the second number;
and mapping the target capacity memory from the virtual memory space to the continuous free pages in the applied memory allocation unit.
6. The method of claim 5, wherein after mapping the target capacity memory from the virtual memory space into the consecutive free pages in the applied memory allocation unit, the method further comprises:
acquiring target information of a free page in the memory allocation unit, wherein the target information of the free page comprises position information of the free page and an offset of the free page in the main memory file;
and recording the target information into a free page record table.
7. The method of claim 3, wherein mapping the virtual memory space with the first capacity into the consecutive free pages comprises:
determining the initial address of the virtual memory space with the first capacity as the initial address of the target virtual memory space;
the ending address of the virtual memory space with the first capacity is the sum of the starting address and the address length corresponding to the first capacity;
and mapping the virtual memory space with the first capacity into the continuous free pages according to the starting address and the ending address.
8. A memory management device, comprising:
the acquisition module is used for acquiring a target capacity memory to be allocated;
the determining module is used for determining a section of target virtual memory space with continuous addresses from a preset virtual memory space according to the target capacity;
the first mapping module is used for mapping the target capacity memory from the virtual memory space to a free page of a preset main memory file according to a target starting address of the target virtual memory space;
and the second mapping module is used for allocating the physical memory with the target capacity from the physical memory according to the mapping relation between the pages in the preset main memory file and the physical memory.
9. A non-volatile storage medium, wherein a program is stored in the non-volatile storage medium, and when the program runs, a device in which the non-volatile storage medium is located is controlled to execute the memory management method according to any one of claims 1 to 7.
10. A computer device, comprising: a memory and a processor for executing a program stored in the memory, wherein the program executes to perform the memory management method of any one of claims 1 to 7.
CN202310052653.3A 2023-02-02 2023-02-02 Memory management method and device Pending CN115794417A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310052653.3A CN115794417A (en) 2023-02-02 2023-02-02 Memory management method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310052653.3A CN115794417A (en) 2023-02-02 2023-02-02 Memory management method and device

Publications (1)

Publication Number Publication Date
CN115794417A true CN115794417A (en) 2023-03-14

Family

ID=85429545

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310052653.3A Pending CN115794417A (en) 2023-02-02 2023-02-02 Memory management method and device

Country Status (1)

Country Link
CN (1) CN115794417A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117687933A (en) * 2023-12-20 2024-03-12 摩尔线程智能科技(北京)有限责任公司 Storage space allocation method, storage space allocation device, storage medium, storage device allocation apparatus, and storage space allocation program product

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6085296A (en) * 1997-11-12 2000-07-04 Digital Equipment Corporation Sharing memory pages and page tables among computer processes
JP2003233533A (en) * 2002-02-08 2003-08-22 Nippon Telegr & Teleph Corp <Ntt> Method and device of memory management, memory management program, and recording medium recording the program
US20060236063A1 (en) * 2005-03-30 2006-10-19 Neteffect, Inc. RDMA enabled I/O adapter performing efficient memory management
US20070118712A1 (en) * 2005-11-21 2007-05-24 Red Hat, Inc. Cooperative mechanism for efficient application memory allocation
US20150199280A1 (en) * 2014-01-14 2015-07-16 Nvidia Corporation Method and system for implementing multi-stage translation of virtual addresses
CN105095094A (en) * 2014-05-06 2015-11-25 华为技术有限公司 Memory management method and equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
孙媛 等: "操作系统原理及应用", 中央民族大学出版社 *

Similar Documents

Publication Publication Date Title
CN114115747B (en) Memory system and control method
CN110209490B (en) Memory management method and related equipment
JP5510556B2 (en) Method and system for managing virtual machine storage space and physical hosts
CN110663019A (en) File system for Shingled Magnetic Recording (SMR)
US11010079B2 (en) Concept for storing file system metadata within solid-stage storage devices
KR102077149B1 (en) Method for managing memory and apparatus thereof
CN109144406B (en) Metadata storage method, system and storage medium in distributed storage system
CN110928803B (en) Memory management method and device
CN115794417A (en) Memory management method and device
CN111031011A (en) Interaction method and device of TCP/IP accelerator
CN115080455A (en) Computer chip, computer board card, and storage space distribution method and device
US20230153236A1 (en) Data writing method and apparatus
CN115269450A (en) Memory cooperative management system and method
CN112256460A (en) Inter-process communication method and device, electronic equipment and computer readable storage medium
KR20170128012A (en) Flash-based storage and computing device
WO2021227789A1 (en) Storage space allocation method and device, terminal, and computer readable storage medium
US20110153691A1 (en) Hardware off-load garbage collection acceleration for languages with finalizers
CN113778688B (en) Memory management system, memory management method, and memory management device
JP2018515859A (en) Method and apparatus for accessing a file and storage system
US20140337572A1 (en) Noncontiguous representation of an array
CN115129459A (en) Memory management method and device
CN106021121B (en) Packet processing system, method and apparatus to optimize packet buffer space
CN108228496B (en) Direct memory access memory management method and device and master control equipment
US11016685B2 (en) Method and defragmentation module for defragmenting resources
CN113535597A (en) Memory management method, memory management unit and Internet of things equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20230314