CN114356795A - Memory management method and related device - Google Patents

Memory management method and related device Download PDF

Info

Publication number
CN114356795A
CN114356795A · CN202111661779.8A
Authority
CN
China
Prior art keywords
memory
data
unused
stored
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111661779.8A
Other languages
Chinese (zh)
Inventor
李亮
黄向平
杨毅
阎松柏
刘中一
刘辉
何友超
张鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Travelsky Technology Co Ltd
Original Assignee
China Travelsky Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Travelsky Technology Co Ltd filed Critical China Travelsky Technology Co Ltd
Priority to CN202111661779.8A priority Critical patent/CN114356795A/en
Publication of CN114356795A publication Critical patent/CN114356795A/en
Pending legal-status Critical Current

Landscapes

  • Memory System (AREA)

Abstract

The embodiment of the application discloses a memory management method and a related device. When data is stored, a processing device first acquires the data to be stored, which has a corresponding target data type, and determines the preset memory corresponding to that type. When the data volume of the data to be stored exceeds the storable data volume of the preset memory, the processing device determines, from an unused memory set, an unused memory that satisfies the data volume of the data to be stored, and then determines a new target memory for the target data type from that unused memory, thereby adjusting the memory dynamically. The adjustment does not involve the memories corresponding to other data types, so memories already in use are unaffected, the memory database does not need to be rebuilt, memory operation and maintenance pressure is reduced, and the flexibility and efficiency of memory management are improved.

Description

Memory management method and related device
Technical Field
The present application relates to the field of data processing, and in particular, to a memory management method and a related apparatus.
Background
The memory is the component of a terminal device responsible for storing data. When data is stored, a storage area must first be allocated for it in the memory, and the data is then written into that area.
In the related art, if the data to be stored exceeds the allocated memory, the memory must be re-partitioned for every kind of data and the whole in-memory database rebuilt, which places heavy pressure on memory operation and maintenance and makes updates slow.
Disclosure of Invention
In order to solve this technical problem, the present application provides a memory management method: when the data to be stored exceeds the storable amount of a preset memory, an unused memory with sufficient capacity can be determined from an unused memory set and used to generate a new memory for that data type, while the memories corresponding to other data types do not need to be changed; the memory database therefore does not need to be rebuilt, and memory operation and maintenance pressure is reduced.
The embodiment of the application discloses the following technical scheme:
in a first aspect, an embodiment of the present application discloses a memory management method, where the method includes:
acquiring data to be stored, wherein the data to be stored has a corresponding target data type;
determining a preset memory corresponding to the target data type;
in response to that the data volume of the data to be stored exceeds the storable data volume corresponding to the preset memory, acquiring an unused memory meeting the data volume of the data to be stored from an unused memory set, wherein the unused memory has continuous memory addresses;
determining a target memory corresponding to the target data type according to the unused memory;
and storing the data to be stored through the target memory.
In a second aspect, an embodiment of the present application discloses a memory management device, where the device includes an obtaining unit, a first determining unit, a first responding unit, a second determining unit, and a storing unit:
the acquisition unit is used for acquiring data to be stored, and the data to be stored has a corresponding target data type;
the first determining unit is configured to determine a preset memory corresponding to the target data type;
the first response unit is configured to, in response to that the data amount of the to-be-stored data exceeds the storable data amount corresponding to the preset memory, acquire an unused memory that satisfies the data amount of the to-be-stored data from an unused memory set, where the unused memory has consecutive memory addresses;
the second determining unit is configured to determine, according to the unused memory, a target memory corresponding to the target data type;
and the storage unit is used for storing the data to be stored through the target memory.
In a third aspect, an embodiment of the present application discloses a computer device, where the device includes a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to execute the memory management method according to the instruction in the program code.
In a fourth aspect, an embodiment of the present application discloses a computer-readable storage medium, where the computer-readable storage medium is used to store a computer program, and the computer program is used to execute the memory management method in the first aspect.
In a fifth aspect, an embodiment of the present application discloses a computer program product including instructions, which when run on a computer, cause the computer to execute the memory management method described in the first aspect.
According to the technical scheme, when data is stored, the processing device first acquires the data to be stored, which has a corresponding target data type, and determines the preset memory corresponding to that type. When the data volume of the data to be stored exceeds the storable data volume of the preset memory, the processing device re-determines, from an unused memory set, an unused memory satisfying the data volume of the data to be stored, and then determines a new target memory for the target data type from that unused memory, thereby adjusting the memory dynamically. The adjustment does not involve the memories corresponding to other data types, so other memories already in use are unaffected, the memory database does not need to be rebuilt, memory operation and maintenance pressure is reduced, and the flexibility and efficiency of memory management are improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a flowchart of a memory management method according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of a memory management method according to an embodiment of the present disclosure;
fig. 3 is an architecture diagram of a memory management method in an actual application scenario according to an embodiment of the present application;
fig. 4 is a flowchart of a memory management method in an actual application scenario according to an embodiment of the present application;
fig. 5 is a block diagram illustrating a memory management device according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a computer device provided in an embodiment of the present application.
Detailed Description
Embodiments of the present application are described below with reference to the accompanying drawings.
A memory-mapped database (Memory MMap Database) is an in-memory database technology that has developed in recent years as 64-bit computer hardware has become common. In a memory-mapped database, data is stored at the high addresses of memory, and the MMap method maps memory to files, thereby implementing serialization and deserialization of the data. Because data access is concentrated in memory, data access is extremely efficient. When the database is constructed, a storage address generally needs to be statically pre-allocated for each data classification, and all data objects in a classification are stored in the specified address order and occupy one continuous memory block. During incremental data updates, once the data volume of a classification grows so that the required continuous memory block exceeds the allocated size, the whole database has to be rebuilt; memory operation and maintenance pressure is high, and data updates and the provision of new data are not timely enough.
In order to solve this technical problem, the present application provides a memory management method: when the data to be stored exceeds the storable amount of a preset memory, an unused memory with sufficient capacity can be determined from an unused memory set and used as a new memory for that data type, while the memories corresponding to other data types do not need to be changed; the memory database therefore does not need to be rebuilt, and memory operation and maintenance pressure is reduced.
It is understood that the method may be applied to a processing device capable of performing memory management, for example a terminal device or a server with a memory management function. The method may be executed independently by the terminal device or the server, or applied in a network scenario in which the terminal device and the server communicate and execute it cooperatively. The terminal device may be a computer, a mobile phone, or the like. The server may be an application server or a Web server; in actual deployment, it may be an independent server or a server cluster.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" in this disclosure are intended to be illustrative rather than limiting; those skilled in the art will understand them as meaning "one or more" unless the context clearly indicates otherwise.
Next, a memory management method provided in an embodiment of the present application will be described with reference to the drawings. Referring to fig. 1, fig. 1 is a flowchart of a memory management method according to an embodiment of the present application, where the method includes:
s101: and acquiring data to be stored.
The data to be stored is data which needs to be stored in the memory, and the data to be stored has a corresponding target data type.
S102: and determining a preset memory corresponding to the target data type.
The predetermined memory is a memory pre-allocated in the memory for the target data type. When allocating the memory, the processing device may perform the following steps:
1) if the estimated memory size does not exceed the unit storage data volume, the memory to be allocated is one unit storage data volume; otherwise, go to 2);
2) if the estimated memory size is larger than the unit storage data volume and less than 2 × the unit storage data volume, the memory to be allocated is 2 × the unit storage data volume; otherwise, go to 3);
3) if the estimated memory size is larger than 2 × the unit storage data volume and less than 4 × the unit storage data volume, the memory to be allocated is 4 × the unit storage data volume;
and so on: when the estimated memory size exceeds several unit storage data volumes, instead of simply allocating one more unit, the allocation is rounded up to the nearest power-of-two multiple of the unit storage data volume (that is, twice the largest power-of-two multiple that is still below the estimate), so that a sufficiently large preset memory is provided, as shown in fig. 2. A sketch of this rounding rule is given below.
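For illustration only, the following C++ sketch shows one way the rounding rule above could be implemented. The function name computeAllocSize, its parameters, and the use of std::size_t are assumptions made for this example and do not come from the application.

#include <cstddef>

// Round an estimated size up to a power-of-two multiple of the unit storage
// data volume (the "default block" of the embodiment, e.g. 256 MB): one unit
// if the estimate fits in one unit, otherwise keep doubling the unit count
// until it covers the estimate.
std::size_t computeAllocSize(std::size_t estimatedSize, std::size_t unitSize) {
    std::size_t units = 1;
    while (units * unitSize < estimatedSize) {
        units *= 2;                      // 1, 2, 4, 8, ... units
    }
    return units * unitSize;             // size of the memory to pre-allocate
}

// Example: with a 256 MB unit, an estimate of 700 MB is rounded up to
// 4 × 256 MB = 1024 MB rather than to 3 × 256 MB.

The same rule can be reused in the application scenario described later, both for the pre-allocated size1 and for the actual data size computed after processing.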
S103: and acquiring the unused memory meeting the data volume of the data to be stored from the unused memory set in response to the fact that the data volume of the data to be stored exceeds the storable data volume corresponding to the preset memory.
The storable data amount refers to the amount of data that a memory can hold. When the data amount of the data to be stored does not exceed the storable data amount, the preset memory can accommodate the data, and the processing device may store it in the preset memory directly. If the data amount of the data to be stored exceeds the storable data amount of the preset memory, the preset memory cannot accommodate the data; the processing device then obtains, from an unused memory set, an unused memory that satisfies the data amount of the data to be stored. An unused memory is a memory that has not been allocated to any data type, and it has consecutive memory addresses, i.e. it is one contiguous memory rather than several small memories.
S104: and determining a target memory corresponding to the target data type according to the unused memory.
In one aspect, when determining the unused memory, the processing device may build on the preset memory: the determined unused memory is adjacent to the preset memory in terms of memory addresses, and the sum of the storable data amounts of the unused memory and the preset memory satisfies the data amount of the data to be stored. In this case, the processing device may splice the unused memory and the preset memory based on their memory addresses to obtain the target memory corresponding to the target data type.
On the other hand, when the unused memory set contains no adjacent memory that can be spliced with the preset memory into a sufficiently large target memory, the processing device may directly select, from the unused memory set, an unused memory that can hold the data to be stored and take that unused memory as the target memory. In this case the processing device may also change the preset memory into an unused memory, so that it can serve other data to be stored. A sketch of this decision is given below.
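As a minimal sketch only — the MemBlock structure, the single unused set, and the function name determineTargetMemory are assumptions made for illustration, not identifiers from the application — the decision could look like this:

#include <cstddef>
#include <cstdint>
#include <new>
#include <vector>

struct MemBlock {
    std::uintptr_t head;   // first memory address of the block
    std::size_t    size;   // storable data amount of the block
    std::uintptr_t tail() const { return head + size; }
};

// Decide the target memory for a data type whose preset block is too small
// for `needed` bytes: prefer splicing an unused block that starts exactly at
// the preset block's tail address; otherwise relocate to a standalone unused
// block and return the preset block to the unused set.
MemBlock determineTargetMemory(MemBlock preset, std::size_t needed,
                               std::vector<MemBlock>& unusedSet) {
    for (std::size_t i = 0; i < unusedSet.size(); ++i) {
        const MemBlock& u = unusedSet[i];
        if (u.head == preset.tail() && preset.size + u.size >= needed) {
            MemBlock target{preset.head, preset.size + u.size};  // splice
            unusedSet.erase(unusedSet.begin() + i);
            return target;
        }
    }
    for (std::size_t i = 0; i < unusedSet.size(); ++i) {
        if (unusedSet[i].size >= needed) {
            MemBlock target = unusedSet[i];
            unusedSet.erase(unusedSet.begin() + i);
            unusedSet.push_back(preset);   // the preset memory becomes unused
            return target;                 // stored data is migrated here next
        }
    }
    throw std::bad_alloc();                // no unused memory is large enough
}

The first loop corresponds to the splicing case, the second to the relocation case; the later sections distinguish continuous and discontinuous unused sets, which this sketch deliberately ignores.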
S105: and storing the data to be stored through the target memory.
According to the technical scheme, when data is stored, the processing device first acquires the data to be stored, which has a corresponding target data type, and determines the preset memory corresponding to that type. When the data volume of the data to be stored exceeds the storable data volume of the preset memory, the processing device re-determines, from an unused memory set, an unused memory satisfying the data volume of the data to be stored, and then determines a new target memory for the target data type from that unused memory, thereby adjusting the memory dynamically. The adjustment does not involve the memories corresponding to other data types, so other memories already in use are unaffected, the memory database does not need to be rebuilt, memory operation and maintenance pressure is reduced, and the flexibility and efficiency of memory management are improved.
In a possible implementation manner, the unused memory sets include an unused continuous memory set and an unused discontinuous memory set, the continuous memory refers to a memory whose storable data amount is an integer multiple of a unit stored data amount, the unit stored data amount refers to a basic unit of the memory data amount, and the discontinuous memory refers to a memory whose storable data amount is not an integer multiple of the unit stored data amount.
The processing device may first determine whether the unused continuous memory set contains an unused memory that satisfies the data amount of the data to be stored; if so, it acquires the unused memory from the unused continuous memory set, and if not, it acquires one from the unused discontinuous memory set. A continuous memory, whose size is a more reasonable multiple of the unit, is therefore preferentially used for memory expansion; a sketch of this lookup follows.
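Purely for illustration — the container names unusedContiguous and unusedNonContiguous are invented for the example, and the hypothetical MemBlock structure from the sketch above is reused — the continuous-set-first lookup could be written as:

#include <cstddef>
#include <new>
#include <vector>

// Take the first unused block that can hold `needed` bytes, searching the
// unused continuous (contiguous) set before the unused discontinuous set.
MemBlock acquireUnused(std::size_t needed,
                       std::vector<MemBlock>& unusedContiguous,
                       std::vector<MemBlock>& unusedNonContiguous) {
    for (auto* set : {&unusedContiguous, &unusedNonContiguous}) {
        for (std::size_t i = 0; i < set->size(); ++i) {
            if ((*set)[i].size >= needed) {
                MemBlock block = (*set)[i];
                set->erase(set->begin() + i);  // leave the unused set
                return block;                  // the caller records it as used
            }
        }
    }
    throw std::bad_alloc();  // neither set holds a large enough block
}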
In addition, in a possible implementation manner, the processing device may splice the non-continuous memories in the unused non-continuous memory set according to their memory addresses; the splicing merges several non-continuous memories with adjacent memory addresses into one memory, so that scattered memories are consolidated and a memory with a more adequate capacity is available for subsequent data to be stored. The unused discontinuous memory set may include a target discontinuous memory; in response to the storable data amount of the target discontinuous memory becoming an integral multiple of the unit storage data amount, the processing device may transfer it into the unused continuous memory set, so that the discontinuous and continuous memory sets are kept up to date.
In a possible implementation manner, to improve memory utilization, the processing device may determine the actual data amount occupied by the data to be stored in the target memory after the data has been stored there. If the actual data amount is smaller than the unit storage data amount, the data to be stored actually occupies only a small part of the target memory and a large part of it is unused. The processing device may then determine the tail address of the memory addresses occupied by the data to be stored; the memory after that tail address is the part not occupied by the data. The processing device can therefore split the target memory at the tail address, obtain a target unused memory from it, and store the target unused memory into the unused memory set, thereby improving the utilization of memory resources.
In addition, before the data to be stored is stored, part of the data of the target data type may already be stored in the preset memory. To ensure data security and integrity, when the preset memory contains stored data of the target data type, the processing device may transfer the stored data to the target memory.
To facilitate understanding of the technical solution provided by the embodiment of the present application, the memory management method is described below in combination with an actual application scenario. In this scenario, a memory is a memory block, the unit storage data amount is a default block, and the default block size may be set to 256M. The memory management method can be applied to the architecture shown in fig. 3. Fig. 4 is a flowchart of the memory management method in this actual application scenario, and the method includes:
First, calculate the pre-allocated memory size1
For an initial build or a newly added classification, the initial estimate of the pre-allocated memory size is temp = sizeof(ClassA); for an incremental-update build, the initial estimate temp equals the size of the original memory block. The calculation steps are as follows:
1) if the estimated memory size does not exceed the default block size, the memory to be allocated is one default block; otherwise, go to 2);
2) if the estimated memory size is larger than the default block size and less than 2 × the default block size, the memory to be allocated is 2 × the default block size; otherwise, go to 3);
3) if the estimated memory size is larger than 2 × the default block size and smaller than 4 × the default block size, the memory to be allocated is 4 × the default block size;
in this way, when the estimated memory size exceeds several default blocks, instead of simply allocating one more default block, the number of default blocks allocated is rounded up to the nearest power of two, i.e. twice the largest power-of-two count that is still below the estimated size.
Under normal conditions the data volume grows regularly and smoothly, and this procedure can effectively cope with normal data growth.
Second, apply for the memory block M1 according to size1
The memory block of size1 is applied for through the memory supply component:
1) for an incremental update, the original memory block is allocated preferentially (during an incremental update, each memory block that needs updating is released through the memory recovery component before the memory application and recorded into the unused contiguous or unused non-contiguous memory container); if the original memory block is smaller than the estimated memory, or this is an initial or newly added build, go to 2);
2) preferentially search the unused contiguous memory container for a memory block meeting the requirement; if one exists, record it in the used contiguous memory container and remove it from the unused contiguous memory container;
3) if no unused contiguous memory satisfies the requirement, search the unused non-contiguous memory container; if a suitable block exists, record it in the used non-contiguous memory container and remove it from the unused non-contiguous memory container;
4) if no unused non-contiguous memory satisfies the requirement either, the whole process fails due to insufficient memory; otherwise the context records the available memory block.
Third, calculate the actual data size
After data processing is complete, the actual data size of a partition is calculated, taking the sum of the sizes of all data objects in the partition as the initial value. When this size does not exceed the default block size, one default block is still allocated; beyond the default block size, instead of simply allocating one more default block, the number of default blocks is rounded up to the nearest power of two, i.e. twice the largest power-of-two count that is still below the required size. The calculation steps are the same as those used for the pre-allocated memory size1.
Fourth, expand or apply for a sufficiently large memory block M2 according to the actual data size
The memory block is expanded through the memory allocation component:
expansion is attempted preferentially from the tail address of the applied memory block M1; if the block can be expanded there and the sum of the expanded memory and M1 satisfies the required size, the expanded memory is added to M1, the expansion block is removed from the unused contiguous or unused non-contiguous memory container, and the expanded M1 is returned;
if M1 cannot be expanded past its tail address (possibly because the memory after the tail address is occupied, or the available block there is not large enough), M1 is recycled through the memory recovery component and recorded into the unused contiguous or unused non-contiguous memory container; recycling M1 may include memory merging;
a memory block is then re-applied for at a new address: the unused contiguous memory container is searched first, and if an unused contiguous memory satisfies the condition, the block is recorded in the used contiguous memory container, removed from the unused contiguous memory container, and returned;
if no unused contiguous memory satisfies the condition, the unused non-contiguous memory container is searched; if an unused non-contiguous memory satisfies the condition, the block is recorded in the used non-contiguous memory container, removed from the unused non-contiguous memory container, and returned;
if no unused non-contiguous memory satisfies the condition either, the whole process fails due to insufficient memory. A sketch of this expansion flow follows.
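As an illustration of this flow only — reusing the hypothetical MemBlock type and the acquireUnused helper from the sketches above, and simplifying the recycling step to a single container insertion — the expansion could be sketched as:

#include <cstddef>
#include <vector>

// Expand M1 in place if an unused block starts exactly at M1's tail address
// and the combined size is sufficient; otherwise recycle M1 and re-apply for
// a block large enough for `needed` bytes at a new address.
MemBlock expandOrReapply(MemBlock m1, std::size_t needed,
                         std::vector<MemBlock>& unusedContiguous,
                         std::vector<MemBlock>& unusedNonContiguous) {
    for (auto* set : {&unusedContiguous, &unusedNonContiguous}) {
        for (std::size_t i = 0; i < set->size(); ++i) {
            MemBlock& u = (*set)[i];
            if (u.head == m1.tail() && m1.size + u.size >= needed) {
                m1.size += u.size;               // grow M1 past its tail
                set->erase(set->begin() + i);
                return m1;
            }
        }
    }
    // Tail expansion failed: give M1 back (merging would happen here in the
    // full procedure) and fall back to a fresh block, contiguous set first.
    unusedNonContiguous.push_back(m1);
    return acquireUnused(needed, unusedContiguous, unusedNonContiguous);
}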
Fifth, recycle the surplus memory
When the memory block M1 actually used by a partition is smaller than the default block size, i.e. 256M, the surplus memory needs to be recycled. The recycled memory block is recorded in the unused non-contiguous memory container for later use. If the recycled block adjoins other unused non-contiguous memory blocks, the non-contiguous blocks are merged, but the size of a merged non-contiguous block must not exceed the default block size of 256M; regardless of its size after merging, the merged block is still recorded in the unused non-contiguous memory container. The recycling procedure is as follows:
a new memory block object M2 is added, and the tail address of M2 is set to the tail address of the used memory block M1;
the tail address of M1 is modified to the tail address of the last piece of data, and this address is written as the head address of the new memory block object M2; the original M1 is thus split into two memory blocks, M1 and M2;
if the original M1 was a contiguous memory, it is removed from the contiguous memory container and recorded in the non-contiguous memory container; M2 is recorded in the unused non-contiguous memory container;
the recycled M2 may then be merged with other unused non-contiguous memory. A sketch of the split follows.
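As an illustrative sketch of the split — reusing the hypothetical MemBlock type from above; lastDataTail is an assumed parameter meaning the tail address of the last stored datum:

#include <cstdint>
#include <vector>

// Split the used block M1 at the tail address of the last stored datum.
// The head part keeps the data and remains in use; the tail part becomes a
// new unused non-contiguous block M2, which may later be merged with its
// neighbours (merged blocks stay capped at the 256 MB default block).
void recycleSurplus(MemBlock& m1, std::uintptr_t lastDataTail,
                    std::vector<MemBlock>& unusedNonContiguous) {
    MemBlock m2{};
    m2.head = lastDataTail;               // M2 starts where the data ends
    m2.size = m1.tail() - lastDataTail;   // surplus length being returned
    m1.size = lastDataTail - m1.head;     // M1 now ends at the last datum
    if (m2.size > 0) {
        unusedNonContiguous.push_back(m2);
    }
}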
Sixth, memory merging
After a non-contiguous memory is recycled, if the memory blocks adjoining it at its head and tail addresses are also unused non-contiguous memories, the memory recovery component merges the available non-contiguous memory blocks:
if the head address of the unused non-contiguous memory block M1 is the tail address of another unused non-contiguous memory block M2, and the sum of the sizes of M1 and M2 does not exceed the default block size, the tail address of M2 is modified to the tail address of M1 and M1 is removed from the unused non-contiguous memory container;
if the tail address of the unused non-contiguous memory block M1 is the head address of another unused non-contiguous memory block M2, and the sum of the sizes of M1 and M2 does not exceed the default block size, the tail address of M1 is modified to the tail address of M2 and M2 is removed from the unused non-contiguous memory container. A sketch of the merge follows.
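Sketched for illustration — the function mergeNeighbours and its signature are assumptions, and the promotion at the end reflects the earlier statement that a block whose size becomes an integral multiple of the unit may be moved to the contiguous set:

#include <cstddef>
#include <vector>

// Merge a freshly recycled unused non-contiguous block with one adjacent
// unused non-contiguous neighbour, provided the merged size stays within
// the default block size, then file the result in the appropriate set.
void mergeNeighbours(MemBlock block, std::size_t defaultBlockSize,
                     std::vector<MemBlock>& unusedNonContiguous,
                     std::vector<MemBlock>& unusedContiguous) {
    for (std::size_t i = 0; i < unusedNonContiguous.size(); ++i) {
        MemBlock& other = unusedNonContiguous[i];
        bool before = (other.tail() == block.head);   // other | block
        bool after  = (block.tail() == other.head);   // block | other
        if ((before || after) && block.size + other.size <= defaultBlockSize) {
            block.head = before ? other.head : block.head;
            block.size += other.size;
            unusedNonContiguous.erase(unusedNonContiguous.begin() + i);
            break;                                     // one merge per call
        }
    }
    if (block.size % defaultBlockSize == 0) {
        unusedContiguous.push_back(block);             // now an exact multiple
    } else {
        unusedNonContiguous.push_back(block);
    }
}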
Seventh, create the mapping files and serialize the memory database
The processed data are written sequentially into the allocated memory blocks. Each partition's memory block independently creates a mapping file, and mmap is called to implement the memory-to-file mapping. When the mmap function is called, its first parameter, the start address of the memory, must be specified, so the start address of each partition's data in the memory-mapped database is that address; the address is computed dynamically by the system from the size of each memory block rather than pre-specified by a static memory allocation table.
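A minimal sketch of mapping one partition's block onto its own file with the POSIX mmap call; the fixed start address passed as the first argument is the point made above. The file handling, the use of MAP_FIXED, and the error handling are simplifying assumptions for the example:

#include <cstddef>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

// Map one partition's memory block onto its own file. `addr` is the start
// address computed from the sizes of the preceding blocks; passing it as the
// first argument (here with MAP_FIXED) pins the partition's data at that
// address in the memory-mapped database.
void* mapPartition(const char* path, void* addr, std::size_t length) {
    int fd = open(path, O_RDWR | O_CREAT, 0644);
    if (fd < 0) return MAP_FAILED;
    if (ftruncate(fd, static_cast<off_t>(length)) != 0) {  // size the file
        close(fd);
        return MAP_FAILED;
    }
    void* p = mmap(addr, length, PROT_READ | PROT_WRITE,
                   MAP_SHARED | MAP_FIXED, fd, 0);
    close(fd);               // the mapping stays valid after the fd is closed
    return p;                // MAP_FAILED on error
}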
Eighth, create an index
ClassA data objects are stored in different mmap mapping files according to airline, origin and destination. An additional memory block is created whose content is a map in which the key is a character string composed of the airline, origin and destination, and the value is a set of pointers to the corresponding data objects. This memory block is likewise mapped and serialized through mmap, which forms an index of ClassA by airline, origin and destination. In the same way, a different index can be built for each type of data.
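Only to illustrate the logical shape of the index — the key format "airline|origin|destination", the type names, and the use of standard containers are assumptions; a map actually serialized into an mmap block would need an offset-based layout rather than ordinary heap-backed containers:

#include <map>
#include <set>
#include <string>

struct ClassA;  // a data object stored in one of the partition mapping files

// Index block content: key = airline + origin + destination, value = the set
// of pointers to the ClassA objects for that key, which must refer to the
// fixed addresses at which the partitions were mapped.
using FlightKey   = std::string;
using ClassAIndex = std::map<FlightKey, std::set<const ClassA*>>;

FlightKey makeKey(const std::string& airline,
                  const std::string& origin,
                  const std::string& destination) {
    return airline + "|" + origin + "|" + destination;  // assumed separator
}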
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Although the operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
Computer program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including but not limited to an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
Based on the memory management method provided in the foregoing embodiment, an embodiment of the present application further provides a memory management device, referring to fig. 5, where fig. 5 is a block diagram of a structure of the memory management device provided in the embodiment of the present application, where the device includes an obtaining unit 501, a first determining unit 502, a first responding unit 503, a second determining unit 504, and a storing unit 505:
the acquiring unit 501 is configured to acquire data to be stored, where the data to be stored has a corresponding target data type;
the first determining unit 502 is configured to determine a preset memory corresponding to the target data type;
the first response unit 503 is configured to, in response to that the data amount of the to-be-stored data exceeds the storable data amount corresponding to the preset memory, obtain an unused memory meeting the data amount of the to-be-stored data from an unused memory set, where the unused memory has consecutive memory addresses;
the second determining unit 504 is configured to determine, according to the unused memory, a target memory corresponding to the target data type;
the storage unit 505 is configured to store the data to be stored in the target memory.
In a possible implementation manner, the unused memory sets include an unused continuous memory set and an unused discontinuous memory set, where the continuous memory refers to a memory whose storable data amount is an integer multiple of a unit stored data amount, the discontinuous memory refers to a memory whose storable data amount is not an integer multiple of a unit stored data amount, and the first response unit 503 is specifically configured to:
determining whether an unused memory meeting the data volume of the data to be stored exists in the unused continuous memory set;
if yes, acquiring the unused memory from the unused continuous memory set;
and if not, acquiring the unused memory from the unused discontinuous memory set.
In one possible implementation, the apparatus further includes a first splicing unit:
the first splicing unit is configured to splice the non-contiguous memories in the unused non-contiguous memory set according to a memory address, where the splicing is configured to merge multiple non-contiguous memories with adjacent memory addresses into one memory.
In a possible implementation manner, the unused discontinuous memory set includes a target discontinuous memory, and the apparatus further includes a second response unit:
the second responding unit is configured to transfer the target discontinuous memory into the unused continuous memory set in response to that the storable data amount of the target discontinuous memory is an integral multiple of the unit storage data amount.
In a possible implementation manner, the apparatus further includes a third determining unit, a third responding unit, a fourth determining unit, and a storing unit:
the third determining unit is configured to determine an actual data amount occupied by the data to be stored in the target memory after the data to be stored is stored in the target memory;
the third response unit is used for responding to the fact that the actual data volume is smaller than the unit storage data volume and determining the tail address of the memory address occupied by the data to be stored;
the fourth determining unit is configured to segment the target memory according to the tail address and determine a target unused memory in the target memory;
and the storage unit is used for storing the target unused memory into the unused memory set.
In a possible implementation manner, the preset memory includes stored data in the stored target data type, and the apparatus further includes a transfer unit:
the transfer unit is configured to transfer the stored data to the target memory.
In a possible implementation manner, the unused memory is adjacent to a memory address of the preset memory, and the second determining unit 504 is configured to:
and splicing the unused memory and the preset memory based on memory addresses, and determining a target memory corresponding to the target data type.
In one possible implementation, the apparatus further includes a changing unit:
the changing unit is used for changing the preset memory into an unused memory.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of an element does not in some cases constitute a limitation on the element itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
The present application further provides a computer device, the device comprising a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to execute any one of the memory management methods according to an instruction in the program code.
The embodiment of the present application further provides a computer-readable storage medium, configured to store a computer program, where the computer program is configured to execute any implementation manner of the memory management method described in the foregoing embodiments.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The present application further provides a computer program product including instructions, which when run on a computer, causes the computer to execute the memory management method provided in any one of the above embodiments.
For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or installed from the storage means 606, or installed from the ROM 602, as shown in fig. 6. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
According to one or more embodiments of the present disclosure, there is provided a memory management method, including:
acquiring data to be stored, wherein the data to be stored has a corresponding target data type;
determining a preset memory corresponding to the target data type;
in response to that the data volume of the data to be stored exceeds the storable data volume corresponding to the preset memory, acquiring an unused memory meeting the data volume of the data to be stored from an unused memory set, wherein the unused memory has continuous memory addresses;
determining a target memory corresponding to the target data type according to the unused memory;
and storing the data to be stored through the target memory.
In a possible implementation manner, the unused memory set includes an unused continuous memory set and an unused non-continuous memory set, where the continuous memory refers to a memory whose storable data amount is an integer multiple of a unit stored data amount, the non-continuous memory refers to a memory whose storable data amount is not an integer multiple of a unit stored data amount, and the obtaining of the unused memory that satisfies the data amount of the data to be stored from the unused memory set includes:
determining whether an unused memory meeting the data volume of the data to be stored exists in the unused continuous memory set;
if yes, acquiring the unused memory from the unused continuous memory set;
and if not, acquiring the unused memory from the unused discontinuous memory set.
In one possible implementation, the method further includes:
and according to the memory address, performing splicing processing on the non-continuous memories in the unused non-continuous memory set, wherein the splicing processing is used for combining a plurality of non-continuous memories adjacent to the memory address into one memory.
In one possible implementation, the unused non-contiguous memory set includes a target non-contiguous memory, and the method further includes:
and in response to that the storable data quantity of the target discontinuous memory is integral multiple of the unit storage data quantity, transferring the target discontinuous memory into the unused continuous memory set.
In one possible implementation, the method further includes:
determining the actual data volume occupied by the data to be stored in the target memory after the data to be stored is stored in the target memory;
determining a tail address of a memory address occupied by the data to be stored in response to the fact that the actual data volume is smaller than the unit storage data volume;
segmenting the target memory according to the tail address, and determining a target unused memory in the target memory;
and storing the target unused memory into the unused memory set.
In a possible implementation manner, the preset memory includes stored data in the stored target data type, and the method further includes:
and transferring the stored data to the target memory.
In a possible implementation manner, the unused memory is adjacent to a memory address of the preset memory, and the determining, according to the unused memory, a target memory corresponding to the target data type includes:
and splicing the unused memory and the preset memory based on memory addresses, and determining a target memory corresponding to the target data type.
In one possible implementation, the method further includes:
and changing the preset memory into an unused memory.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
While several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to the particular combinations of the features described above, and also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by interchanging the above features with (but not limited to) features disclosed in this disclosure that have similar functions.
It should be noted that, in the present specification, all the embodiments are described in a progressive manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the apparatus and system embodiments, since they are substantially similar to the method embodiments, they are described in a relatively simple manner, and reference may be made to some of the descriptions of the method embodiments for related points. The above-described embodiments of the apparatus and system are merely illustrative, and the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
The above description is only one specific embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (12)

1. A memory management method, the method comprising:
acquiring data to be stored, wherein the data to be stored has a corresponding target data type;
determining a preset memory corresponding to the target data type;
in response to that the data volume of the data to be stored exceeds the storable data volume corresponding to the preset memory, acquiring an unused memory meeting the data volume of the data to be stored from an unused memory set, wherein the unused memory has continuous memory addresses;
determining a target memory corresponding to the target data type according to the unused memory;
and storing the data to be stored through the target memory.
2. The method according to claim 1, wherein the unused memory sets include an unused continuous memory set and an unused non-continuous memory set, the continuous memory refers to a memory whose storable data amount is an integer multiple of a unit stored data amount, the non-continuous memory refers to a memory whose storable data amount is not an integer multiple of a unit stored data amount, and the obtaining of the unused memory satisfying the data amount of the data to be stored from the unused memory set includes:
determining whether an unused memory meeting the data volume of the data to be stored exists in the unused continuous memory set;
if yes, acquiring the unused memory from the unused continuous memory set;
and if not, acquiring the unused memory from the unused discontinuous memory set.
3. The method of claim 2, further comprising:
and according to the memory address, performing splicing processing on the non-continuous memories in the unused non-continuous memory set, wherein the splicing processing is used for combining a plurality of non-continuous memories adjacent to the memory address into one memory.
4. The method of claim 2, wherein the set of unused non-contiguous memory comprises a target non-contiguous memory, the method further comprising:
and in response to that the storable data quantity of the target discontinuous memory is integral multiple of the unit storage data quantity, transferring the target discontinuous memory into the unused continuous memory set.
5. The method of claim 1, further comprising:
determining the actual data volume occupied by the data to be stored in the target memory after the data to be stored is stored in the target memory;
determining a tail address of a memory address occupied by the data to be stored in response to the fact that the actual data volume is smaller than the unit storage data volume;
segmenting the target memory according to the tail address, and determining a target unused memory in the target memory;
and storing the target unused memory into the unused memory set.
6. The method of claim 1, wherein the preset memory includes stored data of the target data type, and the method further comprises:
and transferring the stored data to the target memory.
7. The method of claim 1, wherein the unused memory is adjacent to a memory address of the preset memory, and the determining the target memory corresponding to the target data type according to the unused memory comprises:
and splicing the unused memory and the preset memory based on memory addresses, and determining a target memory corresponding to the target data type.
8. The method of claim 1, further comprising:
and changing the preset memory into an unused memory.
9. A memory management device is characterized by comprising an acquisition unit, a first determination unit, a first response unit, a second determination unit and a storage unit:
the acquisition unit is used for acquiring data to be stored, and the data to be stored has a corresponding target data type;
the first determining unit is configured to determine a preset memory corresponding to the target data type;
the first response unit is configured to, in response to that the data amount of the to-be-stored data exceeds the storable data amount corresponding to the preset memory, acquire an unused memory that satisfies the data amount of the to-be-stored data from an unused memory set, where the unused memory has consecutive memory addresses;
the second determining unit is configured to determine, according to the unused memory, a target memory corresponding to the target data type;
and the storage unit is used for storing the data to be stored through the target memory.
10. A computer device, the device comprising a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to execute the memory management method according to any one of claims 1 to 8 according to instructions in the program code.
11. A computer-readable storage medium for storing a computer program for executing the memory management method according to any one of claims 1 to 8.
12. A computer program product comprising instructions which, when run on a computer, cause the computer to perform the memory management method of any one of claims 1 to 8.
CN202111661779.8A 2021-12-30 2021-12-30 Memory management method and related device Pending CN114356795A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111661779.8A CN114356795A (en) 2021-12-30 2021-12-30 Memory management method and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111661779.8A CN114356795A (en) 2021-12-30 2021-12-30 Memory management method and related device

Publications (1)

Publication Number Publication Date
CN114356795A true CN114356795A (en) 2022-04-15

Family

ID=81104628

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111661779.8A Pending CN114356795A (en) 2021-12-30 2021-12-30 Memory management method and related device

Country Status (1)

Country Link
CN (1) CN114356795A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination