CN116860441A - Method, system, terminal equipment and storage medium for managing small-volume external memory based on Bitmap - Google Patents


Info

Publication number
CN116860441A
CN116860441A
Authority
CN
China
Prior art keywords
memory
bitmap
small
chunk
pages
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310789310.5A
Other languages
Chinese (zh)
Inventor
李明
孙炎森
徐晓剑
李春兰
张秋怡
刘磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Citic Bank Corp Ltd
Original Assignee
China Citic Bank Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Citic Bank Corp Ltd filed Critical China Citic Bank Corp Ltd
Priority to CN202310789310.5A priority Critical patent/CN116860441A/en
Publication of CN116860441A publication Critical patent/CN116860441A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/0223User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023Free address space management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5022Mechanisms to release resources
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a Bitmap-based method, system, terminal device and storage medium for managing small off-heap memory, and relates to the field of computer systems. A Bitmap data structure is used to manage a cache of small memory blocks, avoiding the burden of frequent allocation requests to the system; a Bitmap allocates and searches a large number of small blocks efficiently and at low cost. In small-data-volume, high-concurrency scenarios, off-heap memory is used as the storage for receiving and sending data. Memory is requested from the system in chunk-sized blocks; a tree-structured memory pool divides each large block into pages, pages are further divided into sub-pages according to the requested size, and the small-memory sub-pages are managed with a Bitmap. Memory in the pool can be reused: on the next request, available memory is found in the pool and allocated directly. Managing small memory with a Bitmap allows memory to be allocated and reclaimed efficiently at very low cost, greatly improving communication efficiency.

Description

Method, system, terminal equipment and storage medium for managing small-volume external memory based on Bitmap
Technical Field
The invention relates to the field of computer systems, in particular to a Bitmap-based method, system, terminal device and storage medium for managing small off-heap memory.
Background
Off-heap memory refers to memory objects allocated outside the heap of the Java virtual machine; this memory is managed directly by the operating system rather than by the virtual machine, which can reduce the impact of garbage collection on the application to a certain extent.
A Bitmap is a compact data structure representing a set over a finite domain in which each element occurs at most once: each bit records the state of one element, with no other data attached.
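The per-element bit encoding described above can be sketched in Java. This is a minimal illustration with hypothetical names, not the patented implementation:

```java
// Minimal bitmap sketch: one bit per element, packed into a long[] array.
public class Bitmap {
    private final long[] words;

    public Bitmap(int nbits) {
        // Each long holds 64 bits; round the word count up.
        this.words = new long[(nbits + 63) / 64];
    }

    public void set(int i)    { words[i >>> 6] |=  (1L << (i & 63)); }
    public void clear(int i)  { words[i >>> 6] &= ~(1L << (i & 63)); }
    public boolean get(int i) { return (words[i >>> 6] & (1L << (i & 63))) != 0; }

    public static void main(String[] args) {
        Bitmap bm = new Bitmap(73);     // 73 bits -> 2 longs
        bm.set(72);
        System.out.println(bm.get(72)); // true
        bm.clear(72);
        System.out.println(bm.get(72)); // false
    }
}
```

Shift and mask operations (`i >>> 6` for the word index, `i & 63` for the bit within the word) make membership tests and updates O(1).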
Off-heap memory can significantly improve performance in some scenarios: it avoids copying data back and forth between the virtual machine heap and the operating system kernel, and it is not controlled by the JVM. Compared with on-heap memory, it has several advantages:
1. It reduces garbage collection work, since garbage collection suspends other work.
2. It speeds up copying. When on-heap data is sent to a remote peer, it must first be copied to direct (off-heap) memory before being sent; using off-heap memory from the start omits this copy.
But it also has some drawbacks:
1. Off-heap memory is difficult to manage; if used improperly, it can easily cause memory leaks.
2. Requesting off-heap memory is complex and time-consuming, and frequent request and release wastes resources.
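The on-heap/off-heap distinction above can be seen with the JDK's `java.nio.ByteBuffer` API; an illustrative sketch only (class name hypothetical):

```java
import java.nio.ByteBuffer;

public class DirectAllocDemo {
    public static void main(String[] args) {
        // allocateDirect goes through the OS rather than the JVM heap, which is
        // why frequent small direct allocations are comparatively expensive and
        // why the invention pools and reuses them instead.
        ByteBuffer direct = ByteBuffer.allocateDirect(112); // off-heap
        ByteBuffer heap   = ByteBuffer.allocate(112);       // on-heap
        System.out.println(direct.isDirect()); // true
        System.out.println(heap.isDirect());   // false
    }
}
```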
Disclosure of Invention
The embodiments of the invention provide a method, system, terminal device and storage medium for managing small off-heap memory based on a Bitmap. A Bitmap data structure manages a cache of small memory blocks, avoiding the burden of frequent allocation requests to the system; a Bitmap allocates and searches a large number of small blocks efficiently and with low overhead. When a user thread requests memory, the block closest to the requested size is allocated, which reduces memory fragmentation and avoids memory waste. In a high-concurrency system, multiple allocators can be used to separate thread locks: each thread maintains its own Bitmap-based memory cache structure, so threads do not compete during allocation, greatly improving allocation efficiency.
A Bitmap-based method for managing small off-heap memory comprises the following specific steps:
step S21, the application program applies to use a small block of off-heap memory;
step S22, a memory allocator is looked up according to the current thread (or another object capable of avoiding resource contention); if none exists, one is created, and this allocator is used for subsequent allocation;
step S23, the requested memory size is normalized upward;
step S24, judging whether a small memory block meeting the requirement exists; if not, a large memory block is first requested and handed to a Chunk object for management; if so, step S27 is executed;
step S25, creating and initializing a Chunk, wherein the initialization covers basic parameters and data structures, and adding it to the Chunk list;
step S26, splitting the memory block managed by the Chunk into a number of pages organized as a complete binary tree, according to the basic unit size PageUnit;
step S27, a page is allocated in the tree for small-memory allocation, and whether this is the first allocation is judged; if not, step S28 is executed; if so, the page is divided evenly into sub-pages of the normalized size. For example, with a page size of 8KB and an actual request of 100B, normalizing up to a multiple of 16 gives 112B, so one page can be divided into 8KB/112B = 73 sub-page slots. The sub-page is added to the sub-page linked list and its key data structure, the Bitmap, is initialized, the initialization including:
elemSize: the size of each memory block,
maxNumElems: the number of memory blocks,
bitmapLength: the number of long elements used by the bitmap,
nextAvail: the next available bit, initially 0;
step S28, finding the next unallocated bit in the Bitmap, setting the bit as occupied and writing it back into the Bitmap, indicating that the sub-page slot represented by the bit is in use; if all bits are marked, the sub-page is removed from the linked list;
step S29, the application program uses the memory;
step S30, after use is completed, the small memory is released back to the sub-page, the Bitmap flag bit is set back to unused, and the next request is awaited.
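The arithmetic behind steps S23 and S27 (upward normalization to a multiple of 16, then dividing a page into slots) can be sketched as follows; class and method names are hypothetical:

```java
public class SizeNormalizer {
    /** Round a requested size up to the next multiple of 16, as in step S23. */
    static int normalize(int reqSize) {
        return (reqSize + 15) & ~15;
    }

    /** Number of sub-page slots one page yields for a normalized size (step S27). */
    static int subPages(int pageSize, int elemSize) {
        return pageSize / elemSize;
    }

    public static void main(String[] args) {
        System.out.println(normalize(100));      // 112
        System.out.println(subPages(8192, 112)); // 73
    }
}
```

This reproduces the worked example in the text: a 100B request normalizes to 112B, and an 8KB page divides into 73 such slots.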
Further: the system of the invention comprises: the memory management module is used for searching or creating a related memory distributor according to the current thread or other objects capable of avoiding resource competition, and normalizing the number of the applied memories upwards;
the Chunk module is used for managing the memory blocks and splitting the managed memory blocks into a plurality of pages with a complete binary tree structure according to the PageUnit size;
and the Chunk management module is used for creating and initializing Chunk.
Further: the terminal device may include: the system comprises a processor, a storage medium and a bus, wherein the storage medium stores machine-readable instructions executable by the processor, when the terminal device is running, the processor communicates with the storage medium through the bus, and the processor executes the machine-readable instructions to execute the steps of the deep learning model training method as described in the previous embodiment.
Further: a storage medium storing a computer program which, when executed by a processor, performs the steps of the method described above.
The invention has the following beneficial effects. In small-data-volume, high-concurrency scenarios, off-heap memory is used as the storage for receiving and sending data. Memory blocks are requested in chunk units; a tree-structured memory pool divides each large block into pages, pages are further divided into sub-pages according to the requested size, and the small-memory sub-pages are managed with a Bitmap. Memory in the pool can be reused: on the next request, available memory is found in the pool and allocated directly. Managing small memory with a Bitmap allows memory to be allocated and reclaimed efficiently at very low cost, greatly improving communication efficiency.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Figure 1 shows a schematic flow chart of the method of the invention.
Fig. 2 shows a schematic diagram of the memory hierarchy of the present invention.
FIG. 3 shows a schematic diagram of the composition of the small memory Bitmap memory structure of the present invention.
Fig. 4 shows a schematic diagram of the composition of the terminal device of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described with reference to the accompanying drawings in the embodiments of the present invention, and it should be understood that the drawings in the present invention are for the purpose of illustration and description only and are not intended to limit the scope of the present invention. In addition, it should be understood that the schematic drawings are not drawn to scale. A flowchart, as used in this disclosure, illustrates operations implemented according to some embodiments of the present invention. It should be understood that the operations of the flow diagrams may be implemented out of order and that steps without logical context may be performed in reverse order or concurrently. Moreover, one or more other operations may be added to or removed from the flow diagrams by those skilled in the art under the direction of the present disclosure.
In addition, the described embodiments of the invention are only some, but not all, embodiments of the invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present invention.
It should be noted that the term "comprising" will be used in embodiments of the invention to indicate the presence of the features stated hereafter, but not to exclude the addition of other features. It should also be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures. In the description of the present invention, it should also be noted that the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and should not be construed as indicating or implying relative importance.
When memory is requested, several blocks of contiguous memory, called Chunks, are first requested from the system according to the number of threads, with a default size chunkSize, for example 16MB; each block is wrapped by a Chunk object. For finer-grained management, a chunk is further split into pages, whose size is set as appropriate, for example 8KB, so that each chunk contains 2048 pages. This scheme mainly targets small-memory allocation: for requests smaller than 8KB, allocating a whole page to a small object would be wasteful, so on request a page is split into several small-memory sub-pages according to the requested size, and these sub-pages are managed with a Bitmap. A sub-page fixes its element size when initialized and can only hold memory blocks of that one size, so sub-pages of the same size can be gathered into a linked list; each search can then start from the head of the list, and a sub-page is released from the list once it is exhausted.
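The chunk/page geometry in the example sizes above works out as simple arithmetic; a small sketch with hypothetical names:

```java
public class PoolGeometry {
    /** Pages per chunk for the given sizes. */
    static int pagesPerChunk(int chunkSize, int pageSize) {
        return chunkSize / pageSize;
    }

    public static void main(String[] args) {
        int chunkSize = 16 * 1024 * 1024; // 16MB chunk (example default)
        int pageSize  = 8 * 1024;         // 8KB page (example default)
        int pages = pagesPerChunk(chunkSize, pageSize);
        System.out.println(pages);                                // 2048
        // Depth of the complete binary tree whose leaves are the pages:
        System.out.println(Integer.numberOfTrailingZeros(pages)); // 11
    }
}
```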
When a memory area of the same size is requested next time, it suffices to locate the list head matching that size directly and check whether any slot inside is available.
For example, a request of 100B is normalized to 112B, so a page can be divided into 8KB/112B = 73 parts, and a Bitmap records which of the 73 small blocks are in use and where the next request can be served from. The Bitmap is a long array; each bit of each long element indicates whether one memory block is used. Allocation marks bits from low to high, and the next request scans from low to high for the first unused bit. Since one long has 64 bits, a Bitmap of length 2 suffices for 73 small blocks, and with shift operations the cost of marking and searching is very small.
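The low-to-high scan over a two-long Bitmap can be sketched as follows. Names are hypothetical, and `Long.numberOfTrailingZeros` stands in for the shift-based search the text alludes to:

```java
// Hypothetical sketch of a sub-page bitmap: 73 slots packed into 2 longs.
public class SubpageBitmap {
    private final long[] bitmap;
    private final int maxNumElems;

    SubpageBitmap(int maxNumElems) {
        this.maxNumElems = maxNumElems;
        this.bitmap = new long[(maxNumElems + 63) >>> 6]; // 73 -> 2 longs
    }

    /** Find and mark the first free bit, scanning low to high; -1 if full. */
    int allocate() {
        for (int i = 0; i < bitmap.length; i++) {
            long bits = bitmap[i];
            if (~bits != 0) { // at least one zero bit in this word
                int bit = Long.numberOfTrailingZeros(~bits);
                int idx = (i << 6) + bit;
                if (idx >= maxNumElems) return -1; // padding bits past the end
                bitmap[i] |= 1L << bit;            // mark occupied, write back
                return idx;
            }
        }
        return -1;
    }

    /** Clear the bit so the slot can be handed out again. */
    void free(int idx) {
        bitmap[idx >>> 6] &= ~(1L << (idx & 63));
    }

    public static void main(String[] args) {
        SubpageBitmap sp = new SubpageBitmap(73);
        System.out.println(sp.allocate()); // 0
        System.out.println(sp.allocate()); // 1
        sp.free(0);
        System.out.println(sp.allocate()); // 0 (slot reused)
    }
}
```

Freeing a slot simply clears its bit, so the pool reuses memory without returning it to the operating system, which is the low-cost recycle-and-reallocate behavior the text describes.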
Fig. 1 shows the Bitmap-based small-memory usage and interaction processing flow, which specifically includes the following steps:
step S21, the application program applies to use a small block of off-heap memory;
step S22, a memory allocator is looked up according to the current thread (or another object capable of avoiding resource contention); if none exists, one is created, and this allocator is used for subsequent allocation;
step S23, the requested memory size is normalized upward;
step S24, judging whether a small memory block meeting the requirement exists; if not, a large memory block is first requested and handed to a Chunk object for management; if so, step S27 is executed;
step S25, creating and initializing a Chunk, wherein the initialization covers basic parameters and data structures, and adding it to the Chunk list;
step S26, splitting the memory block managed by the Chunk into a number of pages organized as a complete binary tree, according to the basic unit size PageUnit;
step S27, a page is allocated in the tree for small-memory allocation, and whether this is the first allocation is judged; if not, step S28 is executed; if so, the page is divided evenly into sub-pages of the normalized size. For example, with a page size of 8KB and an actual request of 100B, normalizing up to a multiple of 16 gives 112B, so one page can be divided into 8KB/112B = 73 sub-page slots. The sub-page is added to the sub-page linked list and its key data structure, the Bitmap, is initialized, the initialization including:
elemSize: the size of each memory block,
maxNumElems: the number of memory blocks,
bitmapLength: the number of long elements used by the bitmap,
nextAvail: the next available bit, initially 0;
step S28, finding the next unallocated bit in the Bitmap, setting the bit as occupied and writing it back into the Bitmap, indicating that the sub-page slot represented by the bit is in use; if all bits are marked, the sub-page is removed from the linked list;
step S29, the application program uses the memory;
step S30, after use is completed, the small memory is released back to the sub-page, the Bitmap flag bit is set back to unused, and the next request is awaited.
In the sending and receiving of data by a Java remote communication component, under small-data-volume, high-concurrency scenarios, off-heap memory is used as the storage for receiving and sending data. Memory blocks are requested in chunk units; a tree-structured memory pool divides each large block into pages, pages are further divided into sub-pages according to the requested size, and the small-memory sub-pages are managed with a Bitmap. Memory in the pool can be reused: on the next request, available memory is found in the pool and allocated directly. Managing small memory with a Bitmap allows memory to be allocated and reclaimed efficiently at very low cost, greatly improving communication efficiency.
As shown in Figs. 2-3, which illustrate the memory hierarchy and the small-memory Bitmap storage structure, the system of the invention includes:
a memory management module, used to look up or create the memory allocator associated with the current thread (or another object capable of avoiding resource contention), and to normalize the requested memory size upward;
a Chunk module, used to manage memory blocks and split each managed block into pages organized as a complete binary tree according to the PageUnit size;
and a Chunk management module, used to create and initialize Chunks.
As shown in fig. 4, the terminal device 6 may include: a processor 601, a storage medium 602 and a bus 603. The storage medium 602 stores machine-readable instructions executable by the processor 601; when the terminal device runs, the processor 601 communicates with the storage medium 602 through the bus 603 and executes the machine-readable instructions to perform the steps of the method described in the previous embodiments. The specific implementation and technical effects are similar and are not repeated here.
For ease of illustration, only one processor is described for the above terminal device. It should be noted, however, that in some embodiments the terminal device of the invention may include multiple processors, so the steps described here as performed by one processor may also be performed jointly by multiple processors or distributed among them.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any person skilled in the art will readily appreciate variations or alternatives within the scope of the present invention. Therefore, the protection scope of the invention is subject to the protection scope of the claims.

Claims (7)

1. A Bitmap-based method for managing small off-heap memory, characterized by comprising the following specific steps:
step S21, the application program applies to use a small block of off-heap memory;
step S22, a memory allocator is looked up according to the current thread (or another object capable of avoiding resource contention); if none exists, one is created, and this allocator is used for subsequent allocation;
step S23, the requested memory size is normalized upward;
step S24, judging whether a small memory block meeting the requirement exists; if not, a large memory block is first requested and handed to a Chunk object for management; if so, step S27 is executed;
step S25, creating and initializing a Chunk, wherein the initialization covers basic parameters and data structures, and adding it to the Chunk list;
step S26, splitting the memory block managed by the Chunk into a number of pages organized as a complete binary tree, according to the basic unit size PageUnit;
step S27, a page is allocated in the tree for small-memory allocation, and whether this is the first allocation is judged; if not, step S28 is executed; if so, the page is divided evenly into sub-pages of the normalized size according to the actual size;
step S28, finding the next unallocated bit in the Bitmap, setting the bit as occupied and writing it back into the Bitmap, indicating that the sub-page slot represented by the bit is in use;
step S29, the application program uses the memory;
step S30, after use is completed, the small memory is released back to the sub-page.
2. The method of claim 1, wherein the initialized content comprises:
elemSize: the size of each memory block;
maxNumElems: the number of memory blocks;
bitmapLength: the number of long elements used by bitmap;
nextAvail: the next available bit, initially 0.
3. The method of claim 1, wherein in step S28 the sub-page is removed from the linked list if all bits are marked.
4. The method of claim 1, wherein step S30 further comprises modifying the Bitmap flag bit to be unused, waiting for a next application.
5. A Bitmap-based small off-heap memory management system, characterized by comprising:
a memory management module, used to look up or create the memory allocator associated with the current thread (or another object capable of avoiding resource contention), and to normalize the requested memory size upward;
a Chunk module, used to manage memory blocks and split each managed block into pages organized as a complete binary tree according to the PageUnit size;
and a Chunk management module, used to create and initialize Chunks.
6. A terminal device, characterized by comprising: a processor, a storage medium and a bus, the storage medium storing machine-readable instructions executable by the processor, the processor and the storage medium communicating over the bus when the terminal device is operating, the processor executing the machine-readable instructions to perform the steps of the method of any of claims 1 to 4.
7. A storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method according to any of claims 1 to 4.
CN202310789310.5A 2023-06-30 2023-06-30 Method, system, terminal equipment and storage medium for managing small-volume external memory based on Bitmap Pending CN116860441A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310789310.5A CN116860441A (en) 2023-06-30 2023-06-30 Method, system, terminal equipment and storage medium for managing small-volume external memory based on Bitmap

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310789310.5A CN116860441A (en) 2023-06-30 2023-06-30 Method, system, terminal equipment and storage medium for managing small-volume external memory based on Bitmap

Publications (1)

Publication Number Publication Date
CN116860441A true CN116860441A (en) 2023-10-10

Family

ID=88220911

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310789310.5A Pending CN116860441A (en) 2023-06-30 2023-06-30 Method, system, terminal equipment and storage medium for managing small-volume external memory based on Bitmap

Country Status (1)

Country Link
CN (1) CN116860441A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117555674A (en) * 2023-10-26 2024-02-13 南京集成电路设计服务产业创新中心有限公司 Efficient multithreading batch processing block memory pool management method
CN117555674B (en) * 2023-10-26 2024-05-14 南京集成电路设计服务产业创新中心有限公司 Efficient multithreading batch processing block memory pool management method

Similar Documents

Publication Publication Date Title
US6175900B1 (en) Hierarchical bitmap-based memory manager
US8321638B2 (en) Cooperative mechanism for efficient application memory allocation
JP3611305B2 (en) Persistent and robust storage allocation system and method
US7743222B2 (en) Methods, systems, and media for managing dynamic storage
JP3771803B2 (en) System and method for persistent and robust memory management
EP3504628B1 (en) Memory management method and device
CN102289409B (en) The memory allocator of layered scalable
CN110688345A (en) Multi-granularity structured space management mechanism of memory file system
US6804761B1 (en) Memory allocation system and method
US7493464B2 (en) Sparse matrix
CN102985910A (en) GPU support for garbage collection
US10853140B2 (en) Slab memory allocator with dynamic buffer resizing
US10261918B2 (en) Process running method and apparatus
CN110674052B (en) Memory management method, server and readable storage medium
CN116860441A (en) Method, system, terminal equipment and storage medium for managing small-volume external memory based on Bitmap
CN114327917A (en) Memory management method, computing device and readable storage medium
CN108196937B (en) Method and device for processing character string object, computer equipment and storage medium
US5678024A (en) Method and system for dynamic performance resource management within a computer based system
US20190196914A1 (en) System and Method for Creating a Snapshot of a Subset of a Database
CN114296658B (en) Storage space allocation method and device, terminal equipment and storage medium
CN113535392B (en) Memory management method and system for realizing support of large memory continuous allocation based on CMA
CN115756838A (en) Memory release method, memory recovery method, memory release device, memory recovery device, computer equipment and storage medium
CN112948336B (en) Data acceleration method, cache unit, electronic device and storage medium
CN112947863A (en) Method for combining storage spaces under Feiteng server platform
US20120011330A1 (en) Memory management apparatus, memory management method, program therefor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination