CN114741337A - Page table releasing method and computing equipment - Google Patents
- Publication number
- CN114741337A CN114741337A CN202210323635.XA CN202210323635A CN114741337A CN 114741337 A CN114741337 A CN 114741337A CN 202210323635 A CN202210323635 A CN 202210323635A CN 114741337 A CN114741337 A CN 114741337A
- Authority
- CN
- China
- Prior art keywords
- memory
- page table
- page
- memory pages
- referenced
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0877—Cache access modes
- G06F12/0882—Page mode
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0891—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches using clearing, invalidating or resetting means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
- G06F9/5016—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
- G06F9/5022—Mechanisms to release resources
Abstract
The invention discloses a page table releasing method and a computing device. The method is executed in an operating system of the computing device, which includes an internal memory adapted to provide memory pages. The method comprises the following steps: in response to a request of a process to apply for a memory page, allocating the memory page to the process and increasing the number of memory pages referenced by the page table corresponding to the process; in response to a request of a process to release a memory page, releasing the memory page and decreasing the number of memory pages referenced by the page table corresponding to the process; obtaining the number of memory pages currently referenced by a page table, and judging from that number whether all the memory pages referenced by the page table have been released; and if it is determined that the memory pages referenced by the page table have all been released, releasing the page table. According to the technical scheme of the invention, the page table is prevented from occupying extra memory space, and memory resources are saved.
Description
Technical Field
The invention relates to the technical field of computers, in particular to a page table releasing method and computing equipment.
Background
In existing operating systems, a sparse hash table (i.e., a page table) is usually used to manage the mapping from virtual memory to physical memory. A Lazy policy is generally adopted: physical memory and page tables are applied for at page fault time, and likewise released lazily only when the virtual memory is released.
As applications place ever more fine-grained demands on memory management, Linux provides madvise under the virtual memory management framework to manage physical memory: physical memory is applied for based on WILLNEED and released based on DONTNEED. The page table, however, is still managed with the Lazy policy; even after the physical memory has been released based on DONTNEED, the page table is released only when the virtual memory itself is released.
In the prior art, for an application program with a large difference between virtual memory (VIRT) and resident memory (RES), the extra memory space occupied due to the Lazy release policy adopted for kernel page tables can reach hundreds of GB, almost all of it occupied by unreleased pte page tables, which inevitably wastes memory resources.
For this reason, a page table release method is required to solve the problems in the above-described scheme.
Disclosure of Invention
To this end, the present invention provides a page table releasing method and computing device to solve, or at least alleviate, the above existing problems.
According to an aspect of the present invention, there is provided a page table releasing method, executed in an operating system of a computing device, the computing device including an internal memory adapted to provide memory pages, the method comprising the steps of: in response to a request of a process to apply for a memory page, allocating the memory page to the process and increasing the number of memory pages referenced by the page table corresponding to the process; in response to a request of a process to release a memory page, releasing the memory page and decreasing the number of memory pages referenced by the page table corresponding to the process; obtaining the number of memory pages currently referenced by a page table, and judging from that number whether all the memory pages referenced by the page table have been released; and if it is determined that the memory pages referenced by the page table have all been released, releasing the page table.
Optionally, the page table releasing method according to the present invention further comprises the step of: when the process lacks a memory page, allocating the memory page to the process and updating the number of memory pages referenced by the page table corresponding to the process.
Optionally, the page table releasing method according to the present invention further comprises the step of: establishing a reference count structure using a radix tree, so as to describe the number of memory pages referenced by the page table based on the reference count structure.
Optionally, the page table releasing method according to the present invention further comprises the step of: establishing a reference counter in the reference count structure to count the number of memory pages referenced by the page table.
Optionally, in the page table releasing method according to the present invention, after allocating the memory page to the process, the method further comprises the step of: updating the page table corresponding to the process based on the physical memory address of the memory page; and after releasing the memory page, the method further comprises the step of: marking the physical memory address corresponding to the memory page as unallocated in the page table corresponding to the process.
Optionally, in the page table releasing method according to the present invention, the step of judging whether all the memory pages referenced by the page table have been released according to the number of memory pages currently referenced comprises: judging whether the number of memory pages currently referenced by the page table is 0, and if so, determining that all the memory pages referenced by the page table have been released.
Optionally, in the page table releasing method according to the present invention, responding to a request of a process to apply for a memory page comprises: responding to a request of the process to apply for a memory page using the WILLNEED parameter of the madvise function; and responding to a request of a process to release a memory page comprises: responding to a request of the process to release a memory page using the DONTNEED parameter of the madvise function.
Optionally, in the page table releasing method according to the present invention, the page table is a pte table, and the page tables include fragmented page tables.
According to an aspect of the invention, there is provided a computing device comprising: at least one processor; and a memory storing program instructions configured to be executed by the at least one processor, the program instructions including instructions for performing the page table releasing method as described above.
According to one aspect of the present invention, there is provided a readable storage medium storing program instructions that, when read and executed by a computing device, cause the computing device to perform the page table releasing method as described above.
According to the technical scheme of the present invention, a page table releasing method is provided in which the memory pages referenced by each page table are counted. When a process applies for a memory page, lacks a memory page, or requests to release a memory page, the number of memory pages referenced by the corresponding page table is updated after the allocation or release operation is performed. Whether all the memory pages referenced by a page table have been released can then be determined from that count, and if they have, the page table is released. In this way the page table is released promptly once all the memory pages it references have been released, which prevents the page table from occupying extra memory space and saves memory resources. For an application program with a large difference between virtual memory and resident memory, releasing the kernel page tables in time solves the problem of excessive memory space being additionally occupied because page tables are not released during the running of the application process, prevents page tables from occupying so much memory that the operating system and other processes are affected, and improves the running stability of the system.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
To the accomplishment of the foregoing and related ends, certain illustrative aspects are described herein in connection with the following description and the annexed drawings, which are indicative of various ways in which the principles disclosed herein may be practiced, and all aspects and equivalents thereof are intended to be within the scope of the claimed subject matter. The above and other objects, features and advantages of the present disclosure will become more apparent from the following detailed description read in conjunction with the accompanying drawings. Throughout this disclosure, like reference numerals generally refer to like parts or elements.
FIG. 1 illustrates a schematic diagram of an operating system running in a computing device, according to one embodiment of the invention;
FIG. 2 illustrates a block diagram of a computing device 200, according to one embodiment of the invention;
FIG. 3 illustrates a flow diagram of a page table releasing method 300 according to one embodiment of the invention;
FIG. 4 illustrates a flow diagram for allocating memory pages, according to one embodiment of the invention; and
FIG. 5 illustrates a flow diagram for freeing memory pages according to one embodiment of the invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Aiming at the problem that, for an application program with a large difference between virtual memory (VIRT) and resident memory (RES), page tables that are not released in time additionally occupy too much memory space during running, the invention provides a page table releasing method and a computing device for executing the method.
FIG. 1 shows a schematic diagram of an operating system running in a computing device according to one embodiment of the invention. As shown in FIG. 1, a computing device 200 includes a hardware layer, an operating system 120, and one or more processes 110.
Operating system 120 runs in computing device 200, and operating system 120 may provide a software execution environment for one or more processes 110. As shown in FIG. 1, process 110 runs on top of operating system 120. The kernel 125 is included in the operating system 120, and the kernel 125 is responsible for process management, memory management, file management (e.g., file storage space management, directory management, and file read/write management), device management (e.g., I/O requests, buffer management, and drivers), and the like.
The hardware layer may provide a hardware runtime environment for operating system 120 and processes 110 in the computing device. As shown in fig. 1, the hardware layer includes an internal memory 130. The internal memory 130 may provide physical memory pages for processes such that a kernel of the operating system 120 allocates physical memory pages (simply "memory pages") for one or more processes.
One or more processes 110 run based on a hardware layer and an operating system, it being noted that the invention is not limited to the number or variety of applications. In one embodiment of the invention, the process 110 may apply for a memory page or request to release a memory page from the kernel 125 of the operating system.
According to an embodiment of the present invention, the kernel 125 of the operating system 120 may allocate a memory page for a process in response to the process's request to apply for a memory page or to a page fault indicating that the process lacks a memory page, and increase the number of memory pages referenced by the page table corresponding to the process. In response to a request of the process to release a memory page, the kernel releases the memory page and decreases the number of memory pages referenced by the page table corresponding to the process. The kernel then acquires the number of memory pages currently referenced by the page table corresponding to the process and judges, from that number, whether all the memory pages referenced by the page table have been released. If it is determined that all memory pages referenced by the page table have been freed, the page table is freed.
It should be noted that the page table, that is, the sparse hash table, is used to manage mapping of the virtual memory to the physical memory, and specifically, mapping of the virtual memory address to the physical memory address may be implemented based on the page table, so that the processor obtains the physical memory address from the page table based on the virtual memory address and accesses the corresponding physical memory page in the internal memory based on the physical memory address.
The memory page referenced by the page table is a physical memory page corresponding to the physical memory address currently in the allocated state in the page table. The number of memory pages referenced by the page table, i.e., the number of physical memory pages (physical memory addresses) in the page table that are currently in an allocated state.
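The address translation described above can be sketched as follows. This is a minimal user-space illustration assuming the common x86-64 layout (4 KiB pages, 512 entries per last-level table); the function names are inventions for this sketch, not kernel symbols.

```c
#include <stdint.h>

#define PAGE_SHIFT   12                       /* 4 KiB pages */
#define PTE_BITS     9
#define PTE_ENTRIES  (1u << PTE_BITS)         /* 512 entries per pte table */

/* Index into the last-level (pte) page table for a virtual address:
 * bits 12-20 of the address select one of 512 entries. */
unsigned pte_index(uint64_t vaddr)
{
    return (vaddr >> PAGE_SHIFT) & (PTE_ENTRIES - 1);
}

/* Combine the physical frame number found in the pte entry with the
 * page offset to form the physical memory address. */
uint64_t phys_addr(uint64_t frame, uint64_t vaddr)
{
    return (frame << PAGE_SHIFT) | (vaddr & ((1u << PAGE_SHIFT) - 1));
}
```

A pte table whose entries are all in the unallocated state references zero memory pages, which is exactly the condition the method below tests before freeing the table.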
In the embodiment of the present invention, the page table to be released is specifically a pte (page table entry) table; the pte table refers to the last level of the page table hierarchy.
In one embodiment, the operating system 120 is adapted to perform the page table releasing method 300 of the present invention. The page table releasing method 300 will be described in detail below.
FIG. 2 shows a block diagram of a computing device 200, according to one embodiment of the invention.
As shown in FIG. 2, in a basic configuration 202, a computing device 200 typically includes a system memory 206 and one or more processors 204. A memory bus 208 may be used for communication between the processor 204 and the system memory 206.
Depending on the desired configuration, the processor 204 may be any type of processor, including but not limited to: a microprocessor (μP), a microcontroller (μC), a digital signal processor (DSP), or any combination thereof. The processor 204 may include one or more levels of cache, such as a level one cache 210 and a level two cache 212, a processor core 214, and registers 216. An example processor core 214 may include an arithmetic logic unit (ALU), a floating point unit (FPU), a digital signal processing core (DSP core), or any combination thereof. An example memory controller 218 may be used with the processor 204, or in some implementations the memory controller 218 may be an internal part of the processor 204.
Depending on the desired configuration, system memory 206 may be any type of memory, including but not limited to: volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.), or any combination thereof. System memory 206 may include an operating system 220, one or more applications 222, and program data 224. The application 222 is actually a plurality of program instructions that direct the processor 204 to perform corresponding operations. In some embodiments, application 222 may be arranged to cause processor 204 to operate with program data 224 on an operating system.
A network communication link may be one example of a communication medium. Communication media may typically be embodied by computer readable instructions, data structures, or program modules in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media. A "modulated data signal" may be a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of non-limiting example, communication media may include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency (RF), microwave, infrared (IR), or other wireless media. The term computer readable media as used herein may include both storage media and communication media.
In an embodiment in accordance with the invention, the operating system of the computing device 200 is configured to perform the page table releasing method 300 in accordance with the invention. The kernel of the operating system of the computing device 200 contains a plurality of program instructions for executing the page table releasing method 300, which may instruct the processor to execute the method, so that by executing the page table releasing method 300 the operating system optimizes its release policy for page tables and releases them in time to save memory space.
For applications with a large difference between virtual memory (VIRT) and resident memory (RES), such as VS Code and other desktop applications developed with Electron, the page table releasing method 300 according to the present invention can solve the problem of excessive memory space occupation caused by page tables not being released in time while the application process runs.
FIG. 3 shows a flow diagram of a page table releasing method 300 according to one embodiment of the invention. The method 300 is suitable for execution in an operating system of a computing device, such as the computing device 200 described above. The computing device 200 includes internal memory, which may provide physical memory pages.
In an embodiment of the invention, one or more processes may run on the operating system, where each process may correspond to one or more page tables. The operating system may be implemented, for example, as a Linux operating system.
It should be noted that this embodiment describes the page table releasing method 300 only by taking the Linux operating system as an example. However, the page table releasing method 300 of the present invention is not limited to a particular type of operating system; those skilled in the art will appreciate that the method can also be implemented on other types of operating systems, such as Windows, without inventive effort. Any operating system capable of releasing page tables in time by the method of the present invention is within the protection scope of the present invention.
It should be noted that the page table, that is, the sparse hash table, is used to manage mapping of the virtual memory to the physical memory, and specifically, mapping of the virtual memory address to the physical memory address may be implemented based on the page table, so that the processor obtains the physical memory address from the page table based on the virtual memory address and accesses the corresponding physical memory page (hereinafter referred to as "memory page") in the internal memory based on the physical memory address.
In the page table releasing method 300 of the present invention, the page table to be released is a pte (page table entry) table; the pte table refers to the last level of the page table hierarchy.
It is further noted that page tables may include fully used page tables, fully released page tables, and fragmented page tables (i.e., partially used, partially released page tables). The page table releasing method 300 of the present invention is applicable to the release of fragmented page tables. In other words, the page tables in the page table releasing method 300 may include fragmented page tables.
As shown in FIG. 3, the method 300 includes steps S310-S340.
In step S310, in response to a request by one or more processes to apply for memory pages, memory pages are allocated to the process, and the number of memory pages referenced by the page table corresponding to the process is increased.
In addition, when the process lacks memory pages, the process is allocated with the memory pages, and the number of the memory pages referenced by the page table corresponding to the process is updated.
It should be noted that, in the embodiment of the present invention, the memory page referenced by the page table, that is, the physical memory page corresponding to the physical memory address currently in the allocated state in the page table. The number of memory pages referenced by the page table, i.e., the number of physical memory pages (physical memory addresses) in the page table that are currently in an allocated state.
In addition, after allocating a memory page for a process in response to a request for the memory page applied by the process, the kernel of the operating system updates the physical memory address corresponding to the memory page into the page table corresponding to the process, that is, adds the physical memory address and related information (for example, access permission information) of the allocated memory page to the page table corresponding to the process, so as to indicate that the memory page corresponding to the physical memory address is currently in an allocated state.
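The allocation step above can be sketched in a few lines of user-space C. This is a minimal model under assumed names: `struct pte_sketch`, its fields, and `pte_alloc_page` are inventions for illustration, not kernel structures; only the logic (record the frame, bump the count) mirrors the text.

```c
#include <stddef.h>

#define PTE_ENTRIES 512   /* a last-level page table maps 512 pages on x86-64 */

/* Illustrative model of a pte table: frame[i] holds a physical frame
 * number, 0 meaning "unallocated"; refcount tracks how many slots are
 * allocated, as the reference count described in the text does. */
struct pte_sketch {
    unsigned long frame[PTE_ENTRIES];
    int refcount;
};

/* Allocate a page: record its physical frame in a free slot and increase
 * the number of memory pages referenced by this table.
 * Returns 0 on success, -1 if the slot is already allocated. */
int pte_alloc_page(struct pte_sketch *t, int index, unsigned long frame)
{
    if (t->frame[index] != 0)
        return -1;                 /* slot already in the allocated state */
    t->frame[index] = frame;       /* update the page table entry */
    t->refcount++;                 /* one more memory page referenced */
    return 0;
}
```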
In one implementation, each process may use the WILLNEED parameter of the madvise function to apply for memory pages from the kernel of the operating system. The kernel of the operating system may allocate memory pages for processes in response to a request by one or more processes to apply for memory pages using the WILLNEED parameter of the madvise function.
In step S320, in response to a request for releasing a memory page by one or more processes, the memory page is released, and the number of memory pages referenced by the page table corresponding to the process is reduced.
It should be noted that, according to the Lazy policy for page table release, the corresponding page table is released only when the virtual memory is released. When the physical memory page is released, the page table corresponding to the memory page is not released, but only the physical memory address corresponding to the memory page in the page table is marked as an "unallocated" state, so that the physical memory address can be subsequently allocated to other processes to use the released memory page.
That is, in the embodiment of the present invention, after releasing the memory page in response to the request of releasing the memory page by the process, the kernel of the operating system further marks the physical memory address corresponding to the memory page as unallocated in the page table corresponding to the process.
In one implementation, each process may utilize the parameter DONTNEED of the madvise function to request the release of a memory page from the kernel of the operating system. The kernel of the operating system may release the memory page in response to a request by one or more processes to release the memory page using the parameter DONTNEED of the madvise function.
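From a process's point of view, the madvise interface referenced above is the standard Linux one. The following self-contained sketch maps anonymous memory, touches it so physical pages (and the pte tables mapping them) are allocated, then drops the physical pages with MADV_DONTNEED while the virtual mapping stays alive; the function name is an invention, the system calls are real.

```c
#define _DEFAULT_SOURCE
#include <stddef.h>
#include <string.h>
#include <sys/mman.h>

/* Map len bytes of anonymous memory, fault its pages in, then release the
 * physical pages with MADV_DONTNEED. Returns 0 on success, -1 on failure. */
int demo_willneed_dontneed(size_t len)
{
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return -1;

    memset(p, 0xab, len);          /* touch pages: physical memory allocated */

    if (madvise(p, len, MADV_DONTNEED) != 0) {   /* drop the physical pages */
        munmap(p, len);
        return -1;
    }

    /* After MADV_DONTNEED an anonymous page reads back zero-filled. Under
     * the stock Lazy policy the pte table survives this point; the method
     * of the patent would free it here once its count reaches zero. */
    int ok = (p[0] == 0);
    munmap(p, len);
    return ok ? 0 : -1;
}
```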
In step S330, the number of memory pages currently referenced by the page table corresponding to the process is obtained, and whether all the memory pages referenced by the page table are released is determined according to the number of memory pages currently referenced. Here, the operating system may obtain, in real time, the number of memory pages currently referenced by the page table corresponding to each process, and determine whether all the memory pages referenced by the page table are released according to the number of memory pages currently referenced.
Finally, in step S340, if it is determined that all the memory pages referenced by the page table are released, the page table is released.
Here, when determining whether all the memory pages referenced by the page table are released according to the number of the memory pages currently referenced, specifically, determining whether the number of the memory pages currently referenced by the page table is 0, and if the number of the memory pages currently referenced by the page table is 0, determining that all the memory pages referenced by the page table are released, and then releasing the page table. Otherwise, if the number of memory pages currently referenced by the page table is not 0, it is determined that all the memory pages referenced by the page table are not released, at this time, the releasing operation on the page table is ended, and the page table does not need to be released.
According to an embodiment of the present invention, a reference count structure may be established using a radix tree (radix_tree), so as to record the number of memory pages referenced by a page table based on the reference count structure and to update that number in real time according to allocation and release operations on memory pages. Here, the count structure is, for example, pte_desc.
Further, a reference counter is established in the counting structure so as to count the number of memory pages referenced by the page table based on the reference counter. Specifically, the reference counter may be implemented as pte_refcount, with a value range of [0, 512].
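The pte_desc structure can be pictured as below. This is a hedged sketch: the text indexes descriptors through a radix tree, but to keep the example self-contained a flat array stands in for the tree, and the lookup function is an invention for illustration.

```c
#include <stddef.h>

#define MAX_TABLES 64   /* illustrative bound; the patent uses a radix tree */

/* Sketch of the pte_desc reference-count structure named in the text:
 * one descriptor per pte table, whose pte_refcount stays in [0, 512]. */
struct pte_desc {
    void *table;          /* which pte table this descriptor counts */
    int   pte_refcount;   /* memory pages currently referenced */
};

static struct pte_desc descs[MAX_TABLES];

/* Find the descriptor for a table, creating one in the first free slot
 * if none exists yet (the radix tree would do this lookup in the text). */
struct pte_desc *pte_desc_lookup(void *table)
{
    for (size_t i = 0; i < MAX_TABLES; i++) {
        if (descs[i].table == table)
            return &descs[i];
        if (descs[i].table == NULL) {
            descs[i].table = table;
            descs[i].pte_refcount = 0;
            return &descs[i];
        }
    }
    return NULL;          /* descriptor pool exhausted */
}
```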
In an embodiment, after allocating one memory page for a process in response to the process's request for a memory page, the number of memory pages referenced by the page table corresponding to the process may be increased by 1 based on the reference count structure pte_desc. Accordingly, after releasing one memory page in response to the process's request to release a memory page, the number of memory pages referenced by the page table corresponding to the process may be decreased by 1 based on the reference count structure pte_desc.
In this way, in step S330, the number of memory pages currently referenced by the page table corresponding to the process may be obtained from the count structure pte_desc.
FIG. 4 illustrates a flow diagram for allocating memory pages according to one embodiment of the invention. As shown in FIG. 4, the kernel of the operating system may receive a request for a memory page sent by a process using the parameter WILLNEED of the madvise function, or a page-fault (pagefault) event indicating that the process lacks a memory page. Subsequently, the kernel allocates memory pages for the process and increments, in the count structure pte_desc, the reference count of the page table (pte table) corresponding to the process (i.e., the number of memory pages referenced by the page table).
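From userspace, the request in FIG. 4 corresponds to the standard Linux madvise(2) call with MADV_WILLNEED; the kernel-side reference counting described in the text is not observable from here, so this only shows the triggering side:

```c
#define _DEFAULT_SOURCE
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/* Map one anonymous page, advise the kernel it will be needed, then touch
 * it so the allocation actually happens. Returns 0 on success. */
static int touch_willneed(void)
{
    size_t len = (size_t)sysconf(_SC_PAGESIZE);
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return -1;
    if (madvise(p, len, MADV_WILLNEED) != 0) {  /* advise: pages needed soon */
        munmap(p, len);
        return -1;
    }
    p[0] = 1;                                   /* fault the page in */
    return munmap(p, len);
}
```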
FIG. 5 illustrates a flow diagram for freeing memory pages according to one embodiment of the invention. As shown in FIG. 5, the kernel of the operating system may receive a request to release a memory page sent by a process using the parameter DONTNEED of the madvise function. Subsequently, the kernel releases the memory pages and decrements, in the count structure pte_desc, the reference count of the page table (pte table) corresponding to the process (i.e., the number of memory pages referenced by the page table).
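The release side of FIG. 5 corresponds to madvise(2) with MADV_DONTNEED. For a private anonymous mapping, a read after MADV_DONTNEED sees zero-filled pages, which confirms the old page was dropped; the pte-table reference counting happens inside the kernel:

```c
#define _DEFAULT_SOURCE
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/* Allocate one anonymous page, then release it with MADV_DONTNEED and
 * verify the range reads back as zero-filled. Returns 0 on success. */
static int release_dontneed(void)
{
    size_t len = (size_t)sysconf(_SC_PAGESIZE);
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return -1;
    p[0] = 1;                                    /* allocate the page */
    if (madvise(p, len, MADV_DONTNEED) != 0) {   /* release it */
        munmap(p, len);
        return -1;
    }
    int ok = (p[0] == 0) ? 0 : -1;               /* reads back zero-filled */
    munmap(p, len);
    return ok;
}
```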
Further, a determination is made as to whether the reference count of the page table (pte table) in pte_desc is 0 (i.e., whether the number of memory pages currently referenced by the page table is 0). If so, the pte table is released; if not, the release operation for the pte table ends.
According to the page table releasing method of the present invention, the memory pages referenced by each page table are counted. When a process applies for a memory page, lacks a memory page, or requests to release a memory page, the number of memory pages referenced by the corresponding page table is updated after the allocation or release is performed. Whether all the memory pages referenced by the page table have been released can then be determined from that number, and if they have, the page table itself is released. The page table is thus released promptly once the memory pages it references are entirely released, which prevents the page table from occupying extra memory space and saves memory resources. According to the technical scheme of the invention, for an application whose virtual memory greatly exceeds the physical memory it occupies, releasing the kernel page tables in time solves the problem of unreleased page tables occupying excessive additional memory while the application process runs, prevents page tables from consuming so much memory that the operating system and other processes are affected, and improves the running stability of the system.
The various techniques described herein may be implemented in connection with hardware or software or, alternatively, with a combination of both. Thus, the methods and apparatus of the present invention, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as removable hard drives, USB flash disks, floppy disks, CD-ROMs, or any other machine-readable storage medium, wherein, when the program is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention.
In the case of program code execution on programmable computers, the computing device will generally include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. The memory is configured to store the program code; the processor is configured to execute the page table releasing method of the present invention according to the instructions in the program code stored in the memory.
By way of example, and not limitation, readable media may comprise readable storage media and communication media. Readable storage media store information such as computer readable instructions, data structures, program modules or other data. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. Combinations of any of the above are also included within the scope of readable media.
In the description provided herein, algorithms and displays are not inherently related to any particular computer, virtual system, or other apparatus. Various general purpose systems may also be used with examples of this invention. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules or units or components of the devices in the examples disclosed herein may be arranged in a device as described in this embodiment, or alternatively may be located in one or more devices different from the device in this example. The modules in the foregoing examples may be combined into one module or may be further divided into multiple sub-modules.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components in the embodiments may be combined into one module or unit or component, and furthermore, may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
Furthermore, some of the described embodiments are described herein as a method or combination of method elements that can be performed by a processor of a computer system or by other means of performing the described functions. A processor having the necessary instructions for carrying out the method or method elements thus forms a means for carrying out the method or method elements. Further, the elements of the apparatus embodiments described herein are examples of the following apparatus: the apparatus is used to implement the functions performed by the elements for the purpose of carrying out the invention.
As used herein, unless otherwise specified the use of the ordinal adjectives "first", "second", "third", etc., to describe a common object, merely indicate that different instances of like objects are being referred to, and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this description, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as described herein. Furthermore, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. The present disclosure is intended to be illustrative rather than restrictive, the scope of the invention being defined by the appended claims.
Claims (10)
1. A page table release method, performed in an operating system of a computing device including an internal memory therein, the internal memory adapted to provide memory pages, the method comprising the steps of:
in response to a request by a process to apply for a memory page, allocating the memory page for the process, and increasing the number of memory pages referenced by a page table corresponding to the process;
in response to a request by a process to release a memory page, releasing the memory page, and reducing the number of memory pages referenced by the page table corresponding to the process;
obtaining the number of memory pages currently referenced by the page table, and judging, according to the number of currently referenced memory pages, whether all the memory pages referenced by the page table have been released; and
releasing the page table if all the memory pages referenced by the page table have been released.
2. The method of claim 1, further comprising the steps of:
when the process lacks memory pages, allocating the memory pages for the process, and updating the number of the memory pages referenced by the page table corresponding to the process.
3. The method of claim 1 or 2, further comprising the step of:
a reference count structure is built using a radix tree to describe the number of memory pages referenced by the page table based on the reference count structure.
4. The method of claim 3, further comprising the steps of:
a reference counter is established in the count structure to count the number of memory pages referenced by the page table.
5. The method of any one of claims 1-4,
after allocating the memory pages for the process, the method further comprises the following steps:
updating the page table corresponding to the process based on the physical memory address of the memory page;
after releasing the memory pages, the method further comprises the following steps:
marking, in the page table corresponding to the process, the physical memory address corresponding to the memory page as unallocated.
6. The method according to any of claims 1-5, wherein the step of determining whether all memory pages referenced by the page table are released according to the number of currently referenced memory pages comprises:
judging whether the number of memory pages referenced by the page table is 0, and if so, determining that all the memory pages referenced by the page table have been released.
7. The method of any one of claims 1-6,
responding to a request of a process for a memory page comprises:
responding to a request, sent by a process using the parameter WILLNEED of the madvise function, to apply for a memory page;
responding to a request of a process to release a memory page comprises:
responding to a request, sent by a process using the parameter DONTNEED of the madvise function, to release a memory page.
8. The method of any one of claims 1-7,
the page table is a pte table, the page table including a fragmentation page table.
9. A computing device, comprising:
at least one processor; and
a memory storing program instructions, wherein the program instructions are configured to be executed by the at least one processor, the program instructions comprising instructions for performing the method of any of claims 1-8.
10. A readable storage medium storing program instructions that, when read and executed by a computing device, cause the computing device to perform the method of any of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210323635.XA CN114741337A (en) | 2022-03-29 | 2022-03-29 | Page table releasing method and computing equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114741337A (en) | 2022-07-12 |
Family
ID=82276961
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210323635.XA (CN114741337A, pending) | Page table releasing method and computing equipment | 2022-03-29 | 2022-03-29 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114741337A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||