CN115586973A - Process address space management method and device based on dynamic memory allocation technology - Google Patents


Info

Publication number
CN115586973A
CN115586973A (application CN202211502874.8A)
Authority
CN
China
Prior art keywords
memory
address space
linear
dynamic
mapping
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211502874.8A
Other languages
Chinese (zh)
Inventor
杨贻宏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Feiqi Network Technology Co ltd
Original Assignee
Shanghai Feiqi Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Feiqi Network Technology Co ltd filed Critical Shanghai Feiqi Network Technology Co ltd
Priority to CN202211502874.8A priority Critical patent/CN115586973A/en
Publication of CN115586973A publication Critical patent/CN115586973A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5022Mechanisms to release resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

An embodiment of the invention provides a process address space management method and device based on a dynamic memory allocation technology, wherein the method comprises the following steps: dividing the dynamic mapping address space of a process into a plurality of mutually disjoint memory regions, wherein each memory region contains a plurality of linear regions and each memory region corresponds to one CPU core; and organizing and managing the plurality of linear regions based on a lock-free skip list, wherein the lock-free skip list comprises multiple layers of linked lists, each layer contains nodes corresponding to the indexes of at least one linear region, and in any two adjacent layers the upper linked list is a subset of the lower one. The scheme of the invention improves the scalability of the process address space and greatly improves data processing speed and efficiency.

Description

Process address space management method and device based on dynamic memory allocation technology
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method and an apparatus for managing a process address space based on a dynamic memory allocation technique.
Background
Although operating system performance has improved greatly as the number of CPU cores has grown, process management still centers on centralized management of the process address space, so kernel performance cannot scale elastically as the core count increases.
Disclosure of Invention
The invention provides a process address space management method and device based on a dynamic memory allocation technology, which can fully release the parallel efficiency of many cores and greatly improve data processing speed and efficiency.
To solve the above technical problem, the embodiments of the present invention provide the following solutions:
a process address space management method based on a dynamic memory allocation technology comprises the following steps:
dividing the dynamic mapping address space of a process into a plurality of mutually disjoint memory regions, wherein each memory region contains a plurality of linear regions and each memory region corresponds to one CPU core;
and organizing and managing the plurality of linear regions based on a lock-free skip list, wherein the lock-free skip list comprises multiple layers of linked lists, each layer contains nodes corresponding to the indexes of at least one linear region, and in any two adjacent layers the upper linked list is a subset of the lower one.
Optionally, the method for managing a process address space based on a dynamic memory allocation technique further includes:
and when a call from an upper-layer multithreaded application is received, providing the dynamic mapping address space to the multithreaded application as shared memory.
Optionally, the dynamic mapping address space of the process includes the region of the memory mapping segment (mmap); the method further comprises:
creating a new linear region after the operating system performs an mmap system call;
and inserting the new linear region into the dynamic mapping address space.
Optionally, the method for managing a process address space based on a dynamic memory allocation technique further includes:
deleting a memory mapping in the dynamic mapping address space of the process by calling a memory-mapping deletion function; the deletion function removes the mapping by invoking a delete-execution function with a first parameter and a second parameter, where the first parameter is the starting address of the memory mapping and the second parameter is its length.
Optionally, each linear region is represented by a linear region descriptor, which stores: the size of the linear region's interval, the interval's start and end addresses, the interval's attributes, and information about the node of the red-black tree to which the interval belongs; each linear region descriptor also holds one node in the skip list.
Optionally, the method for managing a process address space based on a dynamic memory allocation technique further includes:
receiving an access request to a target linear zone, wherein the access request carries an index of the target linear zone;
searching the first-layer linked list of the lock-free skip list for the node corresponding to the index, and returning the search result if the node is found; otherwise descending to the next-layer linked list and continuing, returning the result found in the Nth-layer linked list, where N is greater than or equal to 2.
Optionally, the method for managing a process address space based on a dynamic memory allocation technique further includes:
when a thread deletes a node from an object, pointing a hazard pointer at the memory space occupied by the deleted node;
and reclaiming the deleted memory space according to the hazard pointer.
An embodiment of the present invention further provides a device for managing a process address space, including:
a dividing module, configured to divide the dynamic mapping address space of a process into a plurality of mutually disjoint memory regions, wherein each memory region contains a plurality of linear regions and each memory region corresponds to one CPU core;
and a management module, configured to organize and manage the plurality of linear regions based on a lock-free skip list, wherein the lock-free skip list comprises multiple layers of linked lists, each layer contains nodes corresponding to the indexes of at least one linear region, and in any two adjacent layers the upper linked list is a subset of the lower one.
Embodiments of the present invention also provide a computing device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, which when executed by the processor implement the steps of the method described above.
Embodiments of the present invention also provide a computer-readable storage medium storing instructions that, when executed on a computer, cause the computer to perform the method as described above.
The scheme of the invention at least comprises the following beneficial effects:
according to the scheme, the dynamic mapping address space of the process is divided into a plurality of mutually disjoint memory areas, each memory area is provided with a plurality of linear areas, and each memory area corresponds to one CPU core; organizing and managing the plurality of linear zones based on a lock-free skip table, wherein the lock-free skip table comprises a plurality of layers of linked lists, each layer of linked list comprises nodes corresponding to the indexes of at least one linear zone, and in the two adjacent layers of linked lists, the upper layer of linked list is a subset of the lower layer of linked list; the multi-core process address space management based on the lock-free skip list is realized, the linear region of the memory interval managed by the lock-free skip list dynamic data structure can be dynamically expanded in parallel, the problem that the performance of the kernel cannot be expanded flexibly when the number of the CPU cores of the current mainstream operating system is increased is solved, the performance bottleneck of the operating system under the existing architecture is broken through, the multi-core parallel efficiency can be fully released and played, and the data processing speed and efficiency are greatly improved.
Drawings
Fig. 1 is a schematic flowchart of a process address space management method based on a dynamic memory allocation technique according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a memory segment according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a process address space management system based on dynamic memory allocation technology according to an embodiment of the present invention;
FIG. 4 is a diagram of a skip list according to an embodiment of the present invention;
fig. 5 is a schematic block diagram of a process address space management apparatus based on a dynamic memory allocation technique according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
As shown in fig. 1, the present invention provides a method for managing a process address space based on a dynamic memory allocation technique, where the method includes:
step 11, dividing the dynamic mapping address space of a process into a plurality of mutually disjoint memory regions, wherein each memory region contains a plurality of linear regions and each memory region corresponds to one CPU core;
and step 12, organizing and managing the plurality of linear regions based on a lock-free skip list, wherein the lock-free skip list comprises multiple layers of linked lists, each layer contains nodes corresponding to the indexes of at least one linear region, and in any two adjacent layers the upper linked list is a subset of the lower one.
In this embodiment, a process is the smallest unit to which the operating system allocates resources, and memory is an essential resource for a process to run. The operating system typically allocates an exclusive memory space to each process; this is a virtual memory space, which is mapped to physical memory when actually used. Each time a process accesses an address (a virtual address) in its memory space, the operating system must translate it into an actual physical memory address.
The operating system divides physical memory into pages, typically 4 KB each (implementations may differ), and allocates memory to processes in units of pages. The table that maps virtual memory to physical memory is called a page table.
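To make the paging arithmetic concrete, the following is a small sketch, assuming the typical 4 KB page mentioned above, of how a virtual address splits into a page number and an in-page offset; the function names are illustrative, not kernel APIs.

```c
#include <stdint.h>

#define PAGE_SHIFT 12u               /* 4 KB pages: 2^12 bytes */
#define PAGE_SIZE  (1u << PAGE_SHIFT)

/* Split a virtual address into its page number and offset within the page. */
uint64_t page_number(uint64_t vaddr) { return vaddr >> PAGE_SHIFT; }
uint64_t page_offset(uint64_t vaddr) { return vaddr & (PAGE_SIZE - 1); }

/* Combine the physical frame a page table lookup yields with the offset,
 * mimicking virtual-to-physical translation. */
uint64_t translate(uint64_t vaddr, uint64_t phys_frame) {
    return (phys_frame << PAGE_SHIFT) | page_offset(vaddr);
}
```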
As shown in fig. 2, in the embodiment of the present invention the virtual address space is divided into two parts: a kernel-mode virtual address space and a user-mode virtual address space. The kernel-mode space maps the kernel's code segment, data segment, and so on, and is isolated from the user-mode address space. In the embodiment of the invention, the division of the process's dynamic mapping address space is performed in the memory mapping segment of the user-mode virtual address space.
The virtual address space is realized by combining segmentation and paging. Segmentation divides memory into segments of possibly different lengths; unused parts of the virtual address space are not mapped into physical memory, so the operating system allocates no physical memory for them.
The kernel allocates only a small amount of physical memory to a newly created process and then allocates more on demand as the process runs and uses memory. In other words, even though the address space nominally spans the full addressable range, not all of it is mapped to physical memory.
For example, in a 32-bit operating system the virtual address is 32 bits long, so the virtual address space is 2^32 = 4 GB. Linux, for example, divides this address space at a 3:1 ratio: 3 GB for user space and 1 GB for kernel space.
The virtual memory address space layout of a Linux process is shown in fig. 2; the segments have the following meanings. Text segment (Text): also called the code segment; when the process starts, the program's code is loaded into physical memory, and the text segment is mapped to it.
Data segment (Data): contains the program's explicitly initialized global variables (including static globals) and static local variables whose initial values are nonzero. These values are determined before the program runs, so they can be loaded into memory in advance.
Uninitialized data (BSS): contains uninitialized global and static variables, whose values are determined only when the program runs and assigns them; at load time only their addresses and required sizes need to be recorded.
Stack (Stack): located at the top of user space; it can grow and shrink dynamically and consists of stack frames (Stack Frames). Each time the process calls a function, a stack frame is allocated for it, holding the function's local variables, parameter values, and return value; the frame is cleaned up when the function returns. As an optimization the compiler passes function parameters in registers, using the stack frame only for parameters that do not fit in registers. Because data on the stack strictly follows LIFO (last-in first-out) order, a single pointer to the top of the stack suffices, and pushing and popping are fast and precise.
Heap (Heap): like the stack, used for run-time memory allocation, but the heap stores data whose lifetime is independent of function calls. The stack's memory addresses grow downward; the heap's grow upward.
Memory mapping segment (Memory Mapping): between the stack and the heap lies the memory mapping segment. Through it the kernel maps file contents directly into memory, and a process can request such a mapping with the mmap system call. Memory mapping is a convenient, efficient form of file I/O, so it is also used to load dynamic libraries.
Kernel segment (Kernel): mapped into the virtual address space of every process while the operating system kernel is running. This memory can be accessed only by the kernel, never by user processes.
The kernel's code and data are mapped into the kernel area, while the process's executable image (code and data) is mapped into the user area of virtual memory.
The dynamic mapping address space of the process is the memory mapping segment. It is divided into a number of mutually disjoint memory regions (Memory Regions), each containing a plurality of linear regions and each corresponding to one CPU core. Thus, on an operating system with multiple CPU cores, each core has its own memory region and the cores do not interfere with one another, which addresses the problem that kernel performance cannot scale elastically as the number of CPU cores grows.
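A hypothetical sketch of the per-core partitioning: the dynamic-mapping range is cut into equal, disjoint slices, one per CPU core. The even split and all names are assumptions for illustration; the patent does not fix a slicing policy.

```c
#include <stdint.h>

struct region { uint64_t start, end; };  /* half-open interval [start, end) */

/* Assign core `cpu` of `ncpu` cores its disjoint slice of the
 * dynamic-mapping range [base, base + span). Slices are equal-sized,
 * adjacent, and never overlap, so cores do not interfere. */
struct region region_for_cpu(uint64_t base, uint64_t span, int ncpu, int cpu) {
    uint64_t per = span / ncpu;
    struct region r = { base + per * (uint64_t)cpu,
                        base + per * (uint64_t)(cpu + 1) };
    return r;
}
```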
When user mode requests dynamic memory, it does not immediately obtain actual physical page frames but only the right to use a new interval of linear addresses; this interval becomes part of the process address space and is called a linear region. A linear region is represented by a linear region descriptor, which stores: the size of the linear region, its start and end addresses, its attributes, and information about the node of the red-black tree to which it belongs.
Linked list and red-black tree structure of linear regions: the linear-region linked list pointed to by mmap is used to traverse the whole process address space;
the red-black tree mm_rb is used to locate which linear region of the process address space a given linear address falls in.
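A miniature, hypothetical analogue of the descriptor and the address-to-region lookup just described. The real kernel keeps this in vm_area_struct and searches the red-black tree mm_rb in logarithmic time; a plain linked list is used here only to keep the sketch short.

```c
#include <stdint.h>
#include <stddef.h>

/* Simplified stand-in for a linear region descriptor; field names echo
 * vm_area_struct but this struct is illustrative only. */
struct linear_region {
    uint64_t vm_start;               /* first address of the interval */
    uint64_t vm_end;                 /* first address past the interval */
    uint32_t vm_flags;               /* R/W/X attributes */
    struct linear_region *vm_next;   /* list used to traverse the space */
};

/* Locate which linear region a given linear address falls in. */
struct linear_region *find_region(struct linear_region *head, uint64_t addr) {
    for (; head; head = head->vm_next)
        if (addr >= head->vm_start && addr < head->vm_end)
            return head;
    return NULL;
}

/* Demo fixture: two disjoint regions; returns start page of the hit, -1 on miss. */
int demo_find(uint64_t addr) {
    struct linear_region b = { 0x2000, 0x3000, 0, NULL };
    struct linear_region a = { 0x1000, 0x1800, 0, &b };
    struct linear_region *hit = find_region(&a, addr);
    return hit ? (int)(hit->vm_start >> 12) : -1;
}
```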
In this embodiment, as shown in fig. 3, for each independent memory region the DS (Dynamic Scalability) structure, which expands dynamically within the process memory address space in the operating system kernel, uses a highly concurrent lock-free skip list to organize and manage all linear regions of the memory interval.
The low-level design of the DS ensures that different threads are not serialized even when modifying linear regions within the same memory interval. This design further improves the elastic expansion of the process address space, and it also addresses the memory-access hazards that arise in highly concurrent data structures implemented with lock-free algorithms.
In the embodiment of the invention, the DS in the process memory address space of the operating system kernel expands dynamically, resources in the process address space are distributed according to the number of system cores, and the linear regions of a memory interval managed by the lock-free skip list, a dynamic data structure, can be expanded dynamically and in parallel, further improving the scalability of the process address space.
In an optional embodiment of the present invention, the method for managing a process address space based on a dynamic memory allocation technique may further include:
and step 13, when receiving the call of the multithread application program of the upper layer, providing the dynamic mapping address space as a shared memory for the multithread application program.
In this embodiment, the DS still presents the abstraction of shared memory to upper-layer multithreaded applications: multiple threads sharing the address space can operate on linear regions in different memory intervals in parallel, so conflicts over shared resources rarely occur.
In an optional embodiment of the present invention, the dynamic mapping address space of the process includes an area of the memory mapping segment mmap; the above method may further comprise:
step 14, after the operating system calls the mmap operation of the memory mapping section, a new linear area is created;
step 15, inserting the new linear region into the dynamic mapping address space.
In this embodiment, a specific implementation is as follows. The dynamic mapping address space of a Linux process is divided into several independent linear regions; for example, the text segment (Text Segment) maps the code of the process's executable file, and the data segment (Data Segment) maps the executable's static variables. The Linux kernel represents a linear region with a linear region descriptor (vm_area_struct).
The linear region descriptor stores information such as the size of the linear region, its start and end addresses, its attributes (R/W/X), and the node of the red-black tree to which it belongs. The descriptor is the most important information about a linear region: any operation associated with a linear region first searches the red-black tree in the process address space and then modifies the descriptor. The memory area between the process heap and the stack is the dynamic mapping area. It is the area the system uses for mmap operations and is also the area modified most frequently while the process runs. For example, each time the kernel performs an mmap system call, a new linear region is created and inserted into the process's address space tree.
The DS divides the dynamically mapped address space within the process address space into several mutually disjoint memory regions (Memory Regions). Excluding memory intervals such as the code and data segments, each memory interval is still about 2500 GB in size in a 32-core environment. Each memory interval belongs to one processor, and whenever a thread running on that processor modifies the process address space, it modifies linear regions in the interval belonging to that processor. This ensures that threads running on multiple processors rarely contend when concurrently accessing the process address space.
In an optional embodiment of the present invention, the method for managing a process address space based on a dynamic memory allocation technique may further include:
step 16, deleting a memory mapping in the dynamic mapping address space of the process by calling the memory-mapping deletion function munmap; munmap removes the mapping by invoking the delete-execution function do_munmap with a first parameter and a second parameter, where the first parameter is the starting address of the memory mapping and the second parameter is its length.
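A sketch of the kernel-side bookkeeping behind the two-parameter delete: given (start, length), clip any linear region the range overlaps. This is a hypothetical simplification; the real do_munmap also splits regions and updates the red-black tree.

```c
#include <stdint.h>

struct lr { uint64_t start, end; };  /* one linear region, [start, end) */

/* Remove the range [start, start+len) from region *r.
 * Returns the number of bytes actually unmapped from this region. */
uint64_t clip_region(struct lr *r, uint64_t start, uint64_t len) {
    uint64_t end = start + len;
    if (end <= r->start || start >= r->end)
        return 0;                          /* no overlap */
    if (start <= r->start && end >= r->end) {
        uint64_t n = r->end - r->start;    /* whole region unmapped */
        r->start = r->end = 0;
        return n;
    }
    if (start <= r->start) {               /* clip from the front */
        uint64_t n = end - r->start;
        r->start = end;
        return n;
    }
    if (end >= r->end) {                   /* clip from the back */
        uint64_t n = r->end - start;
        r->end = start;
        return n;
    }
    return 0;  /* range strictly inside: would require splitting the region */
}

/* Demo: unmap the first 0x2000 bytes of a 0x4000-byte region. */
uint64_t clip_demo(void) {
    struct lr r = { 0x1000, 0x5000 };
    uint64_t n = clip_region(&r, 0x1000, 0x2000);
    return n + r.start;   /* bytes removed plus the region's new start */
}
```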
In an optional embodiment of the present invention, each linear region descriptor holds one node in the skip list; that is, the index of each linear region corresponds one-to-one with a node of the skip list.
In an optional embodiment of the present invention, the method for managing a process address space based on a dynamic memory allocation technique further includes:
step 17, receiving an access request for a target linear region, the request carrying the index of the target linear region;
step 18, searching the first-layer linked list of the lock-free skip list for the node corresponding to the index, and returning the search result if the node is found; otherwise descending to the next-layer linked list and continuing, returning the result found in the Nth-layer linked list, where N is greater than or equal to 2.
As shown in fig. 4, the lock-free skip list L is a multi-layer structure in which each layer can be regarded as a lock-free singly linked list. The elements of an upper linked list are a subset of the elements of the list below it, each element of a layer appearing in the layer above with probability one half. In the ideal case the skip list therefore supports O(log n) search efficiency. The DS skip-list implementation provides lock-free insertion, deletion, and lookup methods.
The elements in each layer of the lock-free skip list L are ordered. A search starts at the top layer of the structure; whenever the value being sought is greater than the largest reachable value in the current list, the search drops to the list below, until the element is found.
for example, if a target node corresponding to the target linear area 12 is searched, searching is performed from the first layer linked list, since 12 is greater than 8, searching is performed by directly jumping to a node on the right side of 8 in the second layer linked list, and 12 is greater than 10, searching is performed by directly jumping to a node on the right side of 10 in the third layer linked list, and the target linear area 12 can be searched through three times of comparison and query; it can be seen that the lookup performance of the data structure such as the skip list is much faster than that of the general linked list data structure, and the lookup efficiency is very high. The insertion and deletion of the nodes can be efficiently performed.
The algorithm is efficient because the number of layers of the lock-free skip list is usually on the order of the logarithm of the number of linear regions in the memory area, and is therefore not large. Through these data structures and design concepts, the DS technology ensures that the operating system remains elastic in future large-scale many-core environments, thereby improving the parallel execution efficiency of programs.
For each individual memory region, the DS organizes all of its linear regions with an independent highly concurrent skip list. Using a skip list instead of the traditional red-black tree to organize linear regions has three benefits. First, like the red-black tree, the skip list offers O(log n) time complexity, so managing memory-range descriptors with it still ensures good system performance. Second, unlike the red-black tree, the skip list in the DS allows multiple threads to operate on any nodes of the list simultaneously; in future large-scale many-core environments, this concurrency can greatly improve the scalability of the kernel's process address space management. Third, compared with the complex self-balancing rotations of a red-black tree, the skip list is much simpler to implement, which substantially reduces the personnel and maintenance costs of large-scale, complex system software.
In an optional embodiment of the present invention, the method for managing a process address space based on a dynamic memory allocation technique may further include:
step 19, when a thread deletes a node from an object, pointing a hazard pointer at the memory occupied by the deleted node;
and step 20, reclaiming the deleted memory according to the hazard pointer.
In this embodiment, the DS prevents unsafe memory accesses by using a safe memory reclamation method based on hazard pointers. Hazard-pointer-based memory management allows memory to be reused arbitrarily after it is reclaimed. The method is wait-free: its core operations require only single-word memory reads and writes. It also provides a lock-free solution to the ABA problem. Pointer management is usually concerned with how to reclaim the memory occupied by deleted objects. With lock-based objects, when a thread deletes a node it is easy to guarantee that no other thread will access the node's memory before it is reused or reallocated; a lock-free structure needs hazard pointers to provide the same guarantee.
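A minimal, single-threaded sketch of the hazard-pointer scheme just described: per-thread slots, deferred retirement, and a scan that reclaims only unprotected nodes. A real lock-free version would use atomic loads and stores, and every name here is illustrative.

```c
#include <stddef.h>

#define MAX_THREADS 4
#define MAX_RETIRED 16

/* One hazard-pointer slot per thread: a reader publishes the node it is
 * about to dereference, and reclaimers must not free a published node. */
static void *hazard[MAX_THREADS];
static void *retired[MAX_RETIRED];
static int n_retired;

void hp_protect(int tid, void *node) { hazard[tid] = node; }
void hp_clear(int tid)               { hazard[tid] = NULL; }

static int is_hazardous(void *node) {
    for (int i = 0; i < MAX_THREADS; i++)
        if (hazard[i] == node)
            return 1;
    return 0;
}

/* Deleting a node does not free it immediately; it is queued instead. */
void hp_retire(void *node) { retired[n_retired++] = node; }

/* Reclaim every retired node no thread currently protects; the rest stay queued. */
int hp_scan(void (*reclaim)(void *)) {
    int freed = 0;
    for (int i = 0; i < n_retired; ) {
        if (!is_hazardous(retired[i])) {
            reclaim(retired[i]);
            retired[i] = retired[--n_retired];
            freed++;
        } else {
            i++;
        }
    }
    return freed;
}

/* Demo: a protected node survives one scan; it is freed after the reader clears. */
static int freed_count;
static void count_free(void *p) { (void)p; freed_count++; }

int hp_demo(void) {
    static int a, b;
    freed_count = 0;
    n_retired = 0;
    hp_protect(0, &a);                 /* thread 0 is still reading node a */
    hp_retire(&a);
    hp_retire(&b);
    int first = hp_scan(count_free);   /* frees only b */
    hp_clear(0);                       /* reader done with a */
    int second = hp_scan(count_free);  /* now frees a */
    return first * 10 + second;
}
```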
The time complexity of safe memory reclamation is O(M·N), where M is the number of layers of the skip list and N is the number of threads in the system. Safe reclamation and reuse of memory compensates for the unsafe memory accesses that the lock-free skip list data structure would otherwise allow.
The embodiment of the invention solves the problem that kernel performance in current mainstream operating systems cannot scale elastically as the number of CPU cores increases, breaks through the performance bottleneck of operating systems under the existing architecture, fully releases many-core parallel efficiency, and greatly improves data processing speed and efficiency.
As shown in fig. 5, an embodiment of the present invention further provides a process address space management apparatus 50 based on a dynamic memory allocation technique, including:
the dividing module 51 is configured to divide a dynamic mapping address space of a process into a plurality of mutually disjoint memory regions, where each memory region has a plurality of linear regions, and each memory region corresponds to a CPU core;
the management module 52 is configured to organize and manage the plurality of linear regions based on a lock-free skip list, where the lock-free skip list includes multiple levels of linked lists, each level of linked list includes nodes corresponding to the indexes of at least one linear region, and in any two adjacent levels the upper linked list is a subset of the lower linked list.
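The per-CPU partitioning performed by the dividing module can be sketched in C as follows (a minimal illustration under the assumption of equal-sized slices; the names `mem_region` and `region_for_cpu` are hypothetical):

```c
#include <assert.h>

/* Hypothetical per-CPU partitioning sketch: the dynamic mapping address
 * space [base, base + span) is cut into ncpu equal, mutually disjoint
 * slices, so that each core allocates only inside its own slice and
 * cross-core contention on allocator metadata is avoided. */
struct mem_region {
    unsigned long start;   /* first address of this core's slice */
    unsigned long end;     /* first address past the slice */
};

static struct mem_region region_for_cpu(unsigned long base,
                                        unsigned long span,
                                        int cpu, int ncpu) {
    unsigned long slice = span / ncpu;
    struct mem_region r = { base + slice * cpu,
                            base + slice * (cpu + 1) };
    return r;
}
```

Because the slices never overlap, a thread pinned to one core can manage its slice's linear regions without synchronizing with other cores.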
Optionally, the management module 52 is further configured to, when receiving a call from an upper-layer multithreaded application, provide the dynamic mapping address space to the multithreaded application as shared memory.
Optionally, the dynamic mapping address space of the process includes the region of the memory-mapped segment (mmap), and the apparatus is further configured to:
create a new linear region after the operating system performs an mmap memory-mapping operation; and
insert the new linear region into the dynamic mapping address space.
Optionally, the management module 52 is further configured to delete a memory mapping in the dynamic mapping address space of the process by calling a memory-mapping deletion function. The deletion function deletes the memory mapping by calling a deletion-execution function with a first parameter and a second parameter, where the first parameter is the start address of the memory mapping and the second parameter is the length of the memory mapping.
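At the user-space level, the create and delete operations correspond to the standard `mmap`/`munmap` calls, and `munmap` takes exactly the two parameters named above: the mapping's start address and its length. A minimal sketch (the helper names `create_region` and `delete_region` are illustrative):

```c
#define _DEFAULT_SOURCE
#include <assert.h>
#include <stddef.h>
#include <sys/mman.h>

/* A successful mmap() call yields a new linear region (a contiguous
 * interval in the process's dynamic mapping address space). */
static void *create_region(size_t len) {
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    return p == MAP_FAILED ? NULL : p;
}

/* Deleting a mapping mirrors the two parameters named in the text:
 * the mapping's start address and its length. */
static int delete_region(void *start, size_t len) {
    return munmap(start, len);   /* 0 on success, -1 on failure */
}
```

Inside the kernel described by the embodiment, the corresponding bookkeeping step would be removing the linear region's node from the lock-free skip list rather than from the conventional red-black tree.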
Optionally, a linear region is represented by a linear region descriptor, which holds the size of the linear region's interval, the start and end addresses of the interval, the attributes of the interval, and the node of the red-black tree to which the interval belongs. Each linear region descriptor also holds a node in the skip list.
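A hypothetical C layout for such a descriptor (field names are illustrative, not taken from the embodiment; in Linux the analogous kernel structure is `struct vm_area_struct`):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical descriptor mirroring the fields named in the text. */
struct linear_region_desc {
    unsigned long vm_start;   /* start address of the interval */
    unsigned long vm_end;     /* first address past the interval */
    unsigned long vm_flags;   /* interval attributes (read/write/...) */
    void *rb_link;            /* node in the red-black tree */
    void *skiplist_node;      /* node held in the lock-free skip list */
};

/* The interval size is derived from the two addresses. */
static unsigned long region_size(const struct linear_region_desc *r) {
    return r->vm_end - r->vm_start;
}
```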
Optionally, the management module 52 is further configured to receive an access request for a target linear region, where the access request carries the index of the target linear region;
search the first-level linked list of the lock-free skip list for the node corresponding to the index and, if the node is found, return the search result; otherwise, jump to the second-level linked list and continue searching, finally returning the search result obtained in the Nth-level linked list, where N is greater than or equal to 2.
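The level-by-level lookup can be sketched in C as follows (a minimal, non-concurrent illustration; a lock-free version would additionally need atomic pointer operations, and a node's height would normally be chosen at random on insertion):

```c
#include <assert.h>
#include <stddef.h>

#define LEVELS 3

/* A node appears in levels 0..height-1; the bottom list holds every
 * index, and each upper list is a subset of the one below it. */
struct sl_node {
    long index;                    /* linear-region index (the key) */
    struct sl_node *next[LEVELS];  /* per-level successor */
};

static struct sl_node head = { -1, { NULL, NULL, NULL } };

/* Top-down search: scan a level while the next key is too small,
 * then drop one level; O(log n) expected with random heights. */
static struct sl_node *sl_find(long index) {
    struct sl_node *cur = &head;
    for (int lv = LEVELS - 1; lv >= 0; lv--)
        while (cur->next[lv] && cur->next[lv]->index < index)
            cur = cur->next[lv];
    struct sl_node *cand = cur->next[0];
    return (cand && cand->index == index) ? cand : NULL;
}

/* Insert with a caller-chosen height (1..LEVELS); a real skip list
 * would pick the height randomly. */
static void sl_insert(struct sl_node *n, int height) {
    struct sl_node *cur = &head;
    for (int lv = LEVELS - 1; lv >= 0; lv--) {
        while (cur->next[lv] && cur->next[lv]->index < n->index)
            cur = cur->next[lv];
        if (lv < height) {             /* splice into this level */
            n->next[lv] = cur->next[lv];
            cur->next[lv] = n;
        }
    }
}
```

The search visits the sparsest list first and only descends when it overshoots, which is what lets the lookup skip most of the bottom-level nodes.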
Optionally, the management module 52 is further configured to, when a thread deletes a node from an object, point a hazard pointer at the memory space occupied by the deleted node, and reclaim the deleted memory space according to the hazard pointer.
The apparatus corresponds to the method described above, and all implementations in the method embodiment are applicable to this apparatus embodiment, achieving the same technical effect.
Embodiments of the present invention also provide a computing device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, which, when executed by the processor, implement the steps of the method described above. All implementations in the above method embodiment are applicable to the computing device embodiment and achieve the same technical effect.
Embodiments of the present invention also provide a computer-readable storage medium storing instructions that, when executed on a computer, cause the computer to perform the method as described above. All the implementation manners in the above method embodiments are applicable to the embodiment of the computer-readable storage medium, and the same technical effect can be achieved.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention or a part thereof which substantially contributes to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U disk, a removable hard disk, a ROM, a RAM, a magnetic disk or an optical disk, and various media capable of storing program codes.
Furthermore, it is to be noted that in the device and method of the invention, it is obvious that the individual components or steps can be decomposed and/or recombined. These decompositions and/or recombinations are to be considered as equivalents of the present invention. Also, the steps of performing the series of processes described above may naturally be performed chronologically in the order described, but need not necessarily be performed chronologically, and some steps may be performed in parallel or independently of each other. It will be understood by those skilled in the art that all or any of the steps or elements of the method and apparatus of the present invention may be implemented in any computing device (including processors, storage media, etc.) or network of computing devices, in hardware, firmware, software, or any combination thereof, which can be implemented by those skilled in the art using their basic programming skills after reading the description of the present invention.
Thus, the objects of the invention may also be achieved by running a program or a set of programs on any computing device, which may be a well-known general-purpose device. The object of the invention is likewise achieved merely by providing a program product comprising program code for implementing the method or apparatus; that is, such a program product also constitutes the present invention, as does a storage medium storing it. It is to be understood that the storage medium may be any known storage medium or any storage medium developed in the future.
While the foregoing is directed to the preferred embodiment of the present invention, it will be appreciated by those skilled in the art that various changes and modifications may be made therein without departing from the principles of the invention as set forth in the appended claims.

Claims (10)

1. A process address space management method based on dynamic memory allocation technology is characterized by comprising the following steps:
dividing a dynamic mapping address space of a process into a plurality of mutually disjoint memory regions, wherein each memory region has a plurality of linear regions and each memory region corresponds to one CPU core;
and organizing and managing the plurality of linear regions based on a lock-free skip list, wherein the lock-free skip list comprises a plurality of levels of linked lists, each level of linked list comprises nodes corresponding to indexes of at least one linear region, and in any two adjacent levels of linked lists the upper-level linked list is a subset of the lower-level linked list.
2. The method for managing process address space based on dynamic memory allocation technique according to claim 1, further comprising:
and when receiving a call from an upper-layer multithreaded application, providing the dynamic mapping address space to the multithreaded application as a shared memory.
3. The method according to claim 1, wherein the dynamic mapping address space of the process includes a region of a memory-mapped segment mmap; the method further comprises:
creating a new linear region after the operating system performs an mmap memory-mapping operation; and
inserting the new linear region into the dynamic mapping address space.
4. The method for managing process address space based on dynamic memory allocation technique according to claim 3, further comprising:
deleting a memory mapping in the dynamic mapping address space of the process by calling a memory-mapping deletion function, wherein the memory-mapping deletion function deletes the memory mapping by calling a deletion-execution function with a first parameter and a second parameter, the first parameter being a start address of the memory mapping and the second parameter being a length of the memory mapping.
5. The method for managing a process address space based on a dynamic memory allocation technique according to claim 1, wherein the linear region is represented by a linear region descriptor, and the linear region descriptor holds: the size of the interval of the linear region, the start and end addresses of the interval, the attributes of the interval, and the node of the red-black tree to which the interval belongs; each linear region descriptor further holds a node in the skip list.
6. The method for managing process address space based on dynamic memory allocation technique according to claim 5, further comprising:
receiving an access request for a target linear region, wherein the access request carries an index of the target linear region;
searching a first-level linked list of the lock-free skip list for a node corresponding to the index and, if the node is found, returning a search result; otherwise, jumping to a second-level linked list to continue searching, and returning a search result obtained in an Nth-level linked list, where N is greater than or equal to 2.
7. The method for managing process address space based on dynamic memory allocation technique according to claim 6, further comprising:
when a thread deletes a node from an object, pointing a hazard pointer at a memory space occupied by the deleted node;
and reclaiming the deleted memory space according to the hazard pointer.
8. A process address space management device based on dynamic memory allocation technology is characterized by comprising:
a dividing module, configured to divide a dynamic mapping address space of a process into a plurality of mutually disjoint memory regions, wherein each memory region has a plurality of linear regions and each memory region corresponds to one CPU core;
and a management module, configured to organize and manage the plurality of linear regions based on a lock-free skip list, wherein the lock-free skip list comprises a plurality of levels of linked lists, each level of linked list comprises nodes corresponding to indexes of at least one linear region, and in any two adjacent levels of linked lists the upper-level linked list is a subset of the lower-level linked list.
9. A computing device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, the program or instructions when executed by the processor implementing the steps of the method of any of claims 1 to 7.
10. A computer-readable storage medium storing instructions that, when executed on a computer, cause the computer to perform the method of any one of claims 1 to 7.
CN202211502874.8A 2022-11-29 2022-11-29 Process address space management method and device based on dynamic memory allocation technology Pending CN115586973A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211502874.8A CN115586973A (en) 2022-11-29 2022-11-29 Process address space management method and device based on dynamic memory allocation technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211502874.8A CN115586973A (en) 2022-11-29 2022-11-29 Process address space management method and device based on dynamic memory allocation technology

Publications (1)

Publication Number Publication Date
CN115586973A true CN115586973A (en) 2023-01-10

Family

ID=84783306

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211502874.8A Pending CN115586973A (en) 2022-11-29 2022-11-29 Process address space management method and device based on dynamic memory allocation technology

Country Status (1)

Country Link
CN (1) CN115586973A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100042584A1 (en) * 2008-08-13 2010-02-18 Sun Microsystems, Inc. Concurrent lock-free skiplist with wait-free contains operator
US20120323970A1 (en) * 2011-06-18 2012-12-20 Microsoft Corporation Dynamic lock-free hash tables

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100042584A1 (en) * 2008-08-13 2010-02-18 Sun Microsystems, Inc. Concurrent lock-free skiplist with wait-free contains operator
US20120323970A1 (en) * 2011-06-18 2012-12-20 Microsoft Corporation Dynamic lock-free hash tables

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIU, Shenming: "Research on Key Technologies for Parallelism and Security of System Software", China Master's Theses Full-text Database (Electronic Journal), Information Science and Technology Series *
WANG, Mingzhen et al.: "Research on Efficient Flow Management Technology Based on Multi-core Network Processors", Journal of Chinese Computer Systems *

Similar Documents

Publication Publication Date Title
EP2488950B1 (en) A tiered data management method and system for high performance data monitoring
US7716448B2 (en) Page oriented memory management
JP5147280B2 (en) System and method for garbage collection in heterogeneous multiprocessor systems
US7409487B1 (en) Virtualization system for computers that use address space indentifiers
KR920005853B1 (en) Apparatus for controlling input/output operation in virtual memory/visual computer type data processing system
KR101367450B1 (en) Performing concurrent rehashing of a hash table for multithreaded applications
US20010011338A1 (en) System method and apparatus for providing linearly scalable dynamic memory management in a multiprocessing system
US7493464B2 (en) Sparse matrix
US9069477B1 (en) Reuse of dynamically allocated memory
US8275968B2 (en) Managing unallocated storage space using extents and bitmaps
CN110727675A (en) Method and device for processing linked list
US10303383B1 (en) System and method for implementing non-blocking, concurrent hash tables
US20210271598A1 (en) Multi-Ring Shared, Traversable, and Dynamic Advanced Database
CN107408132B (en) Method and system for moving hierarchical data objects across multiple types of storage
CN114327917A (en) Memory management method, computing device and readable storage medium
US7711921B2 (en) Page oriented memory management
US7991976B2 (en) Permanent pool memory management method and system
Chen et al. Khuzdul: Efficient and scalable distributed graph pattern mining engine
Silberschatz et al. Operating systems
CN115586973A (en) Process address space management method and device based on dynamic memory allocation technology
CN115729708A (en) Dynamic memory scheduling method, device and equipment for realizing high-efficiency data processing
JP4845149B2 (en) Management device, management program, and management method for managing data
CN113535392B (en) Memory management method and system for realizing support of large memory continuous allocation based on CMA
JPH02162439A (en) Free list control system for shared memory
Shahar et al. Supporting data-driven I/O on GPUs using GPUfs

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20230110