US9367441B2 - Method for managing physical memory of a data storage and data storage management system - Google Patents
- Publication number
- US9367441B2 US13/399,010 US201213399010A
- Authority
- US
- United States
- Prior art keywords
- access information
- memory
- pool
- memory block
- data storage
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related, expires
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/0223—User address space allocation, e.g. contiguous or non contiguous base addressing
- G06F12/023—Free address space management
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
- G06F9/5016—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
Definitions
- the physical memory of the heap is split into 4-kbyte memory portions.
- the physical memory is mapped to virtual memory, wherein every operation, process or application may allocate or access the virtual memory.
- the virtual memory may simulate a continuous section of memory which, however, may be mapped to a large number of separate 4-kbyte memory portions in the physical memory. These portions may be distributed across the physical memory such that they are interspersed with unallocated memory portions. For every access to the memory, a mapping from the virtual memory to the physical memory needs to be performed, which slows the operation down.
- the present invention relates to a method for managing physical memory of a data storage and to a data storage management system.
- the present invention relates to a method for managing physical memory of a data storage and to a data storage management system, wherein a reliability and/or availability and/or determinism and/or a speed of allocation and/or access of the physical memory is improved compared to conventional methods or systems.
- memory management may be of a particular concern.
- the real-time application may comprise measuring a physical quantity, such as a wind speed, a temperature, a vibration, an amount of electric power, an electric voltage, an electric current or the like.
- the real-time application may run to control and/or monitor a power plant, such as a power generation system, in particular a wind turbine system.
- the wind turbine may be required to be controlled depending on one or more measuring data measuring physical quantities, such as the wind speed, the temperature, the amount of electric energy produced, a frequency of a utility grid connected to the wind turbine and the like.
- the application may access or allocate a portion of the heap, which may be a memory area reserved for dynamic memory allocation and memory de-allocation.
- a method for managing physical memory of a data storage, in particular a heap, and a data storage management system, wherein allocating and/or de-allocating and/or accessing a memory portion may be improved, in particular made more efficient compared to conventionally known methods or systems.
- a method for managing physical memory of a data storage, in particular a heap, and a data storage management system, in particular comprising a heap, wherein allocation and/or access of a memory portion may be performed in a predictable time span.
- a method (which may for example be implemented in software, such as using a high level computer language, such as C++, JAVA, C, or which may be implemented by a low level computer language, such as assembler, or wherein the method is implemented in hardware, such as for example by providing an application specific integrated circuit (ASIC)) for managing (in particular for allocating, accessing, de-allocating and/or assigning) physical memory (in particular comprising a plurality of memory cells which may be addressed or referred to by associated unambiguous addresses) of a data storage (for example a portion of a RAM, in particular a heap, or a harddisk, or a flash memory, or any other device which is adapted to store data in an electronic form), in particular a heap (in particular representing an electronic memory area reserved for dynamic memory allocation and/or de-allocation), wherein the method comprises requesting (in particular by a requester, such as a computer process, a computer application, a thread running on the computer, or the like) a memory portion having a memory portion size.
- the method for managing physical memory of the data storage further comprises identifying (in particular comprising determining, calculating and/or deriving) a pool (in particular a data pool, in particular a data container being provided for storing electronic data, in particular provided for storing addresses of the data storage, wherein each address points to a particular memory element or memory cell of the data storage), wherein the pool is provided for storing a number of instances of access information (in particular addresses of the physical memory of the data storage, wherein the access information is indicative of an address of an index portion of the data storage, wherein the length of the index portion may for example be 1 or more bytes, e.g. 1 byte).
- the access information is indicative of an address (in the physical memory of the data storage) of a memory block of the data storage (the address of the memory block in particular being a start address of the memory block, wherein at this start address the memory block starts, wherein the memory block may span a plurality of addresses starting from the start address and extending up to an end address, wherein the extent of the memory block (the difference between the end address and the start address) may correspond to the memory block size, such as a memory block size of a particular number of bytes), wherein the memory block has a memory block size equal to or larger than the memory portion size (such that the requester, requesting memory having the memory portion size is able to use the memory block to hold or store data the application needs to store).
- the memory block may be allocated to the requester and may be reserved by the data storage management system performing the method for managing physical memory of the data storage such that no other application or no other process may access the reserved memory block. Thereby, data integrity may be ensured.
- the access information may be or comprise the address (in the physical memory of the data storage) of the extended memory block of the data storage or the address of the memory block of the data storage.
- the address of the memory block of the data storage may be derivable from the access information.
- the method for managing physical memory of the data storage comprises determining whether access information is stored in a pool. Further, the method comprises, if the access information is stored in a pool (which may indicate that this particular memory block has previously been allocated by the same or by another process or application but has been released again, since the memory block was not used any more by this process or by this application), returning address data of the memory block (in particular returning the address data to the requester requesting the memory portion, in particular returning the address data to the process or the application requesting the memory portion), wherein the address data are based on the access information (such that using the access information the address data are derivable or obtainable; for example, the address data may represent an address of the physical memory of the data storage, which address is spaced apart from the access information (in particular also an address of the physical memory of the data storage) by a predetermined, fixed value, such as 1 byte, 2 bytes or more bytes) and removing the access information from the pool (in particular the access information may be deleted from the pool), thereby indicating that this memory block is in use and no longer available for reuse.
- the method comprises, if the access information is not stored in the pool (indicating that the memory block has not yet been allocated or accessed by any application or any process or any request before), creating the access information (which may comprise searching within the physical memory of the data storage for a portion which is still available, i.e. which has not been accessed by any requester), and returning address data of the memory block (in particular to the requester, such as an application or a process requesting the memory portion), wherein the address data are based on the access information.
- address data of an available memory block are returned to the requester, in particular an application or a process running on a computer, wherein the available memory block may be identified by the address data, wherein in particular the address data may be the address, where the memory block starts within the physical memory of the data storage.
- the address data may for example be any address within the memory block but being spaced apart from the start of the memory block or spaced apart from the end of the memory block by a predetermined amount of addresses.
- the address data may represent the start address of the memory block, thus simplifying the procedure for accessing the memory block by the requester.
- any memory block which has been allocated (in particular by returning the address data of the memory block to the requester) will be controlled by the method for managing physical memory, such that this memory block will only be assigned to any later requester in a non-fragmented manner, such that the entire memory block may be allocated later on by any later requester.
- the method for managing physical memory of the data storage may ensure that the memory block will not be allocated in fragmented portions of the memory block.
- performing the method for managing physical memory of the data storage may avoid fragmentation of the physical memory of the data storage.
- a time span between the requesting the memory portion and the returning the address data may be predictable and may in particular be smaller than or equal to a predetermined allocation time span.
- real-time application or processes may be supported by performing the method for managing physical memory of the data storage.
- the memory block comprises a continuous physical memory section of the data storage (such that the memory block may be formed by a number of adjacent memory segments or memory elements), wherein the physical memory section has the memory block size, wherein in particular the memory block is formed by physically consecutive memory cells (in particular by memory cells which are being addressed by consecutive addresses).
- accessing portions of the memory block or the entire memory block may be accelerated and/or simplified. Thereby, not only the memory allocation time span but also the memory access time span may in particular be reduced.
- the creating the access information is based on a start address (a physical start address) of an available portion (which portion has not been allocated before, which may therefore represent a free memory portion) of the data storage and wherein the creating the access information further comprises changing the start address of the available portion based on the memory block size.
- the start address of an available portion of the data storage may be considered as a pointer pointing to a particular address of the data storage, wherein for addresses larger (or smaller) than the start address the memory may be free, i.e. available for the requester. Whenever the address data of the memory block are returned to the requester, the pointer pointing to the start address from which on there is available memory will be shifted, since the available portion of the data storage has been diminished due to the allocation of the memory block to the requester.
- the start address of the available portion of the data storage may be stored in a data structure which may be accessed.
- the start address may be read out, whenever a further request for a memory portion is received and wherein there is no access information in the corresponding pool which may be returned to the requester.
- a start address for the free area may be maintained such that it may not be necessary to search for a memory block large enough. If there is not enough free memory at the start address, then there may not be enough free memory to fulfill the request. If there is enough memory to fulfill the request then the start address may be changed according to the size necessary to fulfill the request such that the free area is shortened. The number of and/or the combination of earlier requests to get and/or release memory may therefore not affect the time it takes to determine if and where in the heap there is enough memory to fulfill the request. In particular, this can be determined fast and in constant time. Thereby this may accelerate returning the address data of the memory block to the requester.
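The constant-time free-area check described above relies on a single maintained start address: if the tail beyond that address is too small, no amount of searching would help. A minimal sketch, with assumed names and sizes:

```python
# The heap keeps one start pointer for its unused tail, so deciding whether a
# request can be fulfilled needs no search over earlier allocations.

HEAP_SIZE = 64   # assumed total heap size
free_start = 0   # start address of the free (never yet allocated) area

def carve(block_size):
    """Carve block_size bytes from the free area, or return None if it cannot fit."""
    global free_start
    if HEAP_SIZE - free_start < block_size:   # the only free-memory check needed
        return None                           # not enough free memory at all
    start = free_start
    free_start += block_size                  # shorten the free area
    return start
```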
- the creating the access information further comprises writing (in particular electronically modifying one or more memory cells) a pool index (which may enable to identify the pool) relating to the pool into the physical memory of the data storage at an index portion (in particular spanning one or more addresses) of the data storage, wherein an address of the index portion is based on the access information (wherein in particular a start address of the index portion may be the access information), wherein in particular the changing the start address of the available portion of the data storage is further based on a size of the index portion.
- the memory block size may be entirely available for the requester, while the index portion may represent a management overhead for performing the method for managing physical memory of the data storage.
- the index portion may be a small portion compared to the memory block size, in particular the index portion may require 1 byte, while the memory block size may be any size larger than zero.
- a sum of the index portion and the memory block size may, according to an embodiment, correspond to a number of bytes, wherein the number is a power of two.
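One way to realize the power-of-two remark above is to round the requested size plus the index portion up to the next power of two; this particular rounding rule is an illustrative assumption, not stated verbatim in the text:

```python
# Choose the fixed total size so that index portion plus memory block size
# is the smallest power of two covering the request.

INDEX_BYTES = 1  # assumed one-byte index portion

def fixed_total_size(portion_size):
    """Smallest power of two >= portion_size + INDEX_BYTES."""
    total = 1
    while total < portion_size + INDEX_BYTES:
        total *= 2
    return total
```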
- one or more further pools may be adapted to store addresses (access information) which relate to different memory block sizes, as will be explained in detail further below.
- the method for managing physical memory of the data storage further comprises releasing (in particular releasing by the requester or the application or the process), in particular by a requester having requested the memory portion, the memory portion (thereby in particular comprising receiving information relating to the address of the released memory portion); and storing the access information in the pool.
- the access information may be derived from the memory portion, in particular from the address of the memory portion.
- the releasing the memory portion may represent that the requester does not require the memory portion any more, since for example variables or data objects accessing the memory portion have been deleted, in particular destructed, within the running application or the running process.
- the method for managing physical memory of the data storage further comprises determining the access information based on the address of the memory block (in particular, if the address of the memory block represents the start address of the memory block, the access information may be an address before or after the start address of the memory block, in particular the address immediately before the address of the memory block), wherein the storing the access information in the pool is based on the determined access information.
- a particular amount may be subtracted from the address of the memory block, to obtain the access information (in particular an address).
- the method further comprises, upon releasing the memory portion, maintaining (in particular keeping, thus not deleting) the pool index relating to the pool in the physical memory of the data storage at the index portion of the data storage (such that in particular the pool index is kept in the physical memory of the data storage, although the requester has released the memory portion).
- a pattern of differently sized memory portions being subsequently requested by one or more requesters may lead to a pattern of plural pool indices stored at addresses that are spaced apart corresponding to the plural memory blocks which may have different memory block sizes depending on the size of the memory portions being subsequently requested.
- This pattern of plural pool indices written at plural instances of access information may be maintained during performing the method, even if one or more memory portions are being released by the one or more requesters.
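The release path described in the preceding bullets (derive the access information from the block address, read the maintained pool index next to the block, and push the access information back into that pool) might look as follows; all names are illustrative:

```python
# The pool index written beside the block is kept in place on release, so a
# released block can later be recycled for a request of the same size class.

INDEX_BYTES = 1           # assumed size of the index portion
heap = bytearray(64)      # simulated physical memory
pools = {0: [], 1: [], 2: []}

def release(address_data):
    """Return a previously allocated block to its pool."""
    access_info = address_data - INDEX_BYTES  # derive access info from block address
    pool_index = heap[access_info]            # read the maintained pool index
    pools[pool_index].append(access_info)     # store access info; index byte stays
```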
- the index portion in the data storage, is physically located adjacent (in particular immediately adjacent) to the memory block, in particular in a byte before or in a byte after the memory block.
- the index portion may be found based on the address of the memory block in a simple manner.
- the memory block in particular the address of the memory block, further in particular the start address of the memory block, may be found based on the address of the index portion, in particular based on the access information, in a simple manner.
- a performance in particular a speed of the method, may be increased.
- the identifying the pool comprises determining the pool index based on the access information.
- the pool index may be determined by accessing and reading the data stored at the access information (and spanning a number of addresses corresponding to the index portion).
- another access information (in particular also being an address within the physical memory of the data storage) is stored in the pool, wherein the other access information is indicative of another address of another physical memory block of the data storage, wherein the other memory block has the same memory block size as the memory block.
- the pool is adapted to store plural instances of access information (such as in particular addresses) including the access information and the other access information, wherein the pool is adapted such that the access information and the other access information are accessible in a time span which is independent of the number of the plural instances of access information stored in the pool.
- the pool may store the plural instances of access information in an array-like data structure, in a queue, in another suitable data container or in any other data structure that enables storing a number of elements and accessing the elements in constant time independent of the number of elements.
- the performance, in particular the speed, of the managing method may be increased.
- the method further comprises handling a request for a further memory portion having a further memory portion size; identifying a further pool, wherein the further pool is provided for storing at least one further access information indicative of a further address of a further memory block of the data storage, the further memory block having a further memory block size which is equal to or larger than the further memory portion size and different from the memory block size; determining whether the further access information is stored in the further pool; if the further access information is stored in the further pool, returning further address data of the further memory block, wherein the further address data are based on the further access information, and removing the further access information from the further pool; if the further access information is not stored in the further pool, creating the further access information and returning further address data of the further memory block, wherein the further address data are based on the further access information.
- another further access information may be stored in the further pool, wherein the other further access information is indicative of another further address of another further memory block of the data storage, wherein the other further memory block has the further memory block size.
- the other further access information, in particular several instances of access information, may be stored, wherein each instance of the access information relates to a memory block having the same memory block size.
- the method further comprises defining a data container, in particular in the data storage or e.g. in a further data storage separate from the data storage, for storing plural pools, in particular a predetermined number of pools, wherein the pool and the further pool are stored in the data container, wherein the data container is adapted such that the pool and the further pool (and/or elements comprised in the pool and the further pool) may be accessed within a time span which is constant, independent of the number of the plural pools stored in the container.
- the data container may be an array-like data structure or any other data container which allows access to any of its elements in a constant time independent of the number of elements stored within the data container.
- the pools can be of any type that can remove at least one of its elements in constant time, and may be able to add an element in constant time.
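One pool structure satisfying the constant-time requirement above is a preallocated array used as a stack; this is only one option among the array-like structures and queues the text mentions, and the class and its names are assumptions:

```python
# A minimal pool: add and remove are O(1) regardless of how many access
# information instances are currently stored.

class Pool:
    """Stores access information instances; add/remove are constant time."""
    def __init__(self, capacity):
        self.slots = [0] * capacity   # preallocated array-like storage
        self.count = 0
    def add(self, access_info):       # constant time: write at the top
        self.slots[self.count] = access_info
        self.count += 1
    def remove(self):                 # constant time: pop from the top
        self.count -= 1
        return self.slots[self.count]
    def __len__(self):
        return self.count
```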
- a data storage management system comprising a data storage for storing data; and a controller for controlling an access to the data storage, wherein the controller is adapted such that the controller performs the following method steps: receiving a request, the request requesting a memory portion having a memory portion size; identifying a pool, wherein the pool is provided for storing at least one access information indicative of an address of a memory block of the data storage, the memory block having the smallest possible fixed memory block size equal to or larger than the memory portion size; determining whether the access information is stored in the pool; if the access information is stored in the pool, returning address data of the memory block, wherein the address data are based on the access information and removing the access information from the pool; if the access information is not stored in the pool, creating the access information, and returning address data of the memory block, wherein the address data are based on the access information.
- the data storage may be at least a portion of a RAM, in particular a heap, a harddisk, a hard drive, a flash storage, a magneto-optical data storage or any other data storage for storing data, in particular electric and/or electronic data.
- the controller may be configured as an application specific integrated circuit, as (a part of) an operating system for operating a processor, in particular operating a computer. Further in particular, the controller may be an additional component or external device interfacing to the data storage and providing an access layer for accessing the data storage. Further in particular, the controller comprises a general purpose processor and a software element which may be executed by the processor.
- a power production system in particular a wind turbine system
- the data storage management system wherein the controller in particular controls access to the data storage, wherein access to the data storage is requested by an application for controlling and/or monitoring the wind turbine system, in particular for measuring a physical quantity, such as a wind speed, a temperature, a vibration and/or for controlling at least one component of the energy production system, such as a rotor blade, in particular a rotor blade pitch angle, an amount of energy produced or released by a converter of the wind turbine system or the like.
- a physical quantity such as a wind speed, a temperature, a vibration
- at least one component of the energy production system such as a rotor blade, in particular a rotor blade pitch angle, an amount of energy produced or released by a converter of the wind turbine system or the like.
- the method for managing physical memory of a data storage is performed in a real-time application, wherein in particular the real-time application requires that a result is correct and that the result is obtained within a predetermined time span.
- a program element and/or an electronically readable storage medium are provided, in particular harbouring the program element, wherein the program element, when executed by a processor, such as a computer, is adapted to carry out or control an embodiment of a method for managing physical memory of a data storage, as described above.
- pools of the needed sizes are built according to the needs of the system itself. This may come at the cost of higher memory usage and the introduction of a similar but smaller issue.
- the system needs more memory blocks of a given size than it has used since start-up and the unused heap has become too small to allocate a block of the proper size
- the total sum of unused memory blocks in pools and heap not in pools together may be enough for the request without it being possible to return them as a contiguous memory block. This could only happen if the total amount of heap is not enough to create the number of different memory blocks that the system has needed during its execution. If this happens, it could degrade availability.
- One of the advantages of embodiments of this invention is that the number of large memory blocks (needed by the system) cannot be reduced as time goes by; it can only increase.
- the number of allocated blocks of any given size is at any time equal to the highest number of blocks (of that size) that the system has used since the system started. Compared to the common heap management strategies, this solution can only grow in the number of allocated blocks of any given size.
- memory is allocated from one end of the heap and only in continuation thereof (similar to next fit where memory is not released), and memory is never returned to the free heap, i.e. the part that has not yet been made into memory blocks.
- a memory block that is returned to the heap manager is stored in a pool instead. This ensures that external fragmentation of the heap (not made into memory blocks) is avoided.
- Memory may always be allocated in one of several possible fixed sizes. A pool may be created for each possible fixed block size.
- Each pool may only be used for addresses of one specific block size, and no two pools have addresses to blocks of the same size.
- if a memory block address is returned to the heap manager, then it may be stored in the pool for that memory block size.
- Embodiments of the invention may provide a heap manager that is a piece of software (and/or hardware) that is used to manage heap.
- FIGURE schematically illustrates a data storage management system according to an embodiment of the present invention, wherein the data storage management system is adapted to perform a method for managing physical memory of the data storage according to an embodiment of the present invention.
- the data storage management system 100 comprises a data storage 101 for storing data, wherein the data storage may in particular be a heap.
- the data storage 101 comprises plural memory cells which may be accessed using unambiguous memory cell addresses, wherein the memory cells are arranged in a consecutive manner and the plurality of memory cells is depicted as a line 103 .
- the data storage 101 comprises an accessing component 105 which may access any of the memory cells by referring to each of the memory cells using an address in the data storage 101 .
- the addresses are numbers arranged in an increasing order, for accessing particular memory cells by the accessing device 105 . Thereby, the accessing device 105 may move along the plural memory cells 103 as indicated by the double arrow 107 .
- the accessing device 105 may be a purely electronic device having no mechanical components.
- the accessing device 105 may be a control circuit adapted for accessing different memory cells of a memory chip.
- the accessing device 105 may comprise mechanical and/or electronic components.
- the data storage management system 100 further comprises a controller 109 for controlling an access and/or allocation of portions of the data storage 101 .
- a process or an application 111 may request a memory portion (having a particular memory portion size) from the controller 109 . Thereby, the application 111 may transmit a request 113 for this memory portion to the controller 109 .
- the controller 109 will be adapted to maintain a data container of plural pools, wherein each of the pools is provided for storing access information relating to a memory block having a particular memory block size.
- the controller 109 may keep a data container 115, wherein an array index i0, i1, i2, i3, … identifies a pool, wherein in each pool a number of instances of access information, such as ai1, ai2, ai3, ai4, ai5, ai6 and ai7, may be stored.
- the pool identified by the index i0 stores instances of access information ai4, wherein this access information ai4 relates to memory block m4 having a memory block size of 2 bytes.
- the index i1 identifies a pool which stores or may store instances of access information ai2, ai3, ai5 and ai7, wherein these instances of access information relate to memory blocks m2, m3, m5 and m7, respectively, which all have the same memory block size of 4 bytes.
- the index i2 identifies a pool in which instances of access information ai1 and ai6 may be stored, wherein these access information instances relate to memory blocks m1 and m6, respectively, which both have a memory block size of 8 bytes.
- the controller 109 may comprise in the data container 115 further pools having indices i3, i4, … identifying the further pools relating to memory blocks within the data storage 101 which have even larger memory block sizes.
- the application 111 may first request a memory portion having a memory portion size of for example 6 bytes.
- the controller 109 will identify the pool which is provided for storing access information, such as the access information ai 1 indicative of an address a 1 of a memory block m 1 of the data storage 101 , wherein the memory block m 1 has a usable memory block size of 7 bytes, which is larger than the requested memory portion size of 6 bytes.
- Upon receiving the request 113 for a memory portion having a memory portion size of 6 bytes, the controller 109 will allocate memory cells 103 in the data storage 101 from one of the ends of the memory cells 103 in the data storage 101 , since at that end the start pointer 118 (described in detail below) is positioned. Alternatively, the allocation could start from the other end and proceed backwards.
- the controller 109 will then write the index i 2 into the index portion of the data storage 101 , starting at the address ai 1 (i 2 being the index of the pool which is provided for storing access information relating to memory blocks having a usable size of 7 bytes).
- the start address a 1 of the memory block m 1 may be obtained from the access information ai 1 by adding the length of the index portion i 2 to the address ai 1 .
- the thus determined start address a 1 of the memory block m 1 is then returned via the response 117 to the requesting application 111 . Further, the index i 2 is written into the index portion.
- the access information ai 1 is not added to the pool identified by the index i 2 .
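The fresh-allocation path walked through above can be sketched as follows. This is an illustrative model only, not the patent's implementation: the names (INDEX_WIDTH, storage, start, allocate_fresh) are invented for the example, a bytearray stands in for the data storage 101, and an integer stands in for the start pointer 118.

```python
# Minimal sketch of allocating a fresh extended memory block; names are
# illustrative assumptions, not taken from the patent.
INDEX_WIDTH = 1          # width of the index portion: one byte in this example

storage = bytearray(32)  # stands in for the data storage 101
start = 0                # stands in for the start pointer 118 (first free cell)

def allocate_fresh(pool_index: int, extended_size: int) -> int:
    """Carve a new extended memory block from the free end of storage.

    Writes the pool index into the index portion and returns the address of
    the first byte after the index (the address handed to the requester).
    """
    global start
    ai = start                   # access information: address of the index portion
    storage[ai] = pool_index     # e.g. index i2 for 8-byte extended blocks
    start += extended_size       # shift the start pointer past the new block
    return ai + INDEX_WIDTH      # usable start address (a1 in the example above)

a1 = allocate_fresh(pool_index=2, extended_size=8)  # 8-byte extended block, 7 usable
```

Note that, as in the walk-through, the access information is not placed in any pool at this point; it only enters a pool when the block is later released.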
- the application 111 may request a memory portion having a memory portion size of 2 bytes.
- Upon receiving the request from the application 111 , the controller 109 will identify the pool labelled or indexed by the index i 1 , wherein this pool is provided for storing instances of access information relating to memory blocks having a usable memory block size of 3 bytes. Further, the index i 1 is written into the index portion starting at the address ai 2 . Further, the address a 2 is returned to the application 111 using the response 117 .
- the controller 109 may receive further requests for memory portions from the application 111 or from several other applications or processes running on a computer. In particular, when a new request for a memory portion is received by the controller 109 , the controller 109 first looks in the appropriate pool identified by one of the indices i 0 , i 1 , i 2 , . . . for access information which is related to a memory block having a sufficient memory block size to satisfy the request for memory.
- the controller 109 determines whether the data storage 101 , starting at the start address 118 of available memory (which indicates the start of an available section of the data storage 101 ), is large enough.
- the start address 118 of available memory is dynamically adapted (shifted in the direction of the unused end, in this case to the right in the FIGURE), whenever the controller 109 returns an address or address data a 1 , a 2 , a 3 , a 4 , a 5 , a 6 , a 7 to the application 111 .
- determining the location for an available or free memory portion may be performed in a predictable time span, in particular it may be done in constant time.
- the application 111 may from time to time release memory portions which have previously been used by the application 111 .
- the application may release a memory portion stored in the memory block m 2 .
- the application 111 may transmit the address data a 2 to the controller 109 .
- the controller 109 may determine the access information ai 2 and may look into the data storage 101 at the address ai 2 to retrieve the index i 1 .
- the access information ai 2 will be stored in the pool identified by the index i 1 .
- the application 111 may release a memory portion which is stored in the memory block m 6 which starts at the address a 6 .
- the application 111 may transmit the address a 6 to the controller 109 .
- the controller 109 may calculate the access information ai 6 , for example by subtracting 1 byte from the address a 6 , i.e. by subtracting from the address a 6 the width or size of the index portion.
- the controller 109 will access the data storage 101 at the address ai 6 and will read the index i 2 being stored at this address.
- the access information or address ai 6 will be stored in the pool identified by the index i 2 , as is illustrated in the FIGURE.
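The release path just described (subtract the index-portion width, read the stored index, file the address in that pool) can be sketched as follows; the names and the pre-set addresses are illustrative assumptions, not from the patent.

```python
# Minimal sketch of releasing a memory block back to its pool; names are
# illustrative assumptions, not taken from the patent.
INDEX_WIDTH = 1
storage = bytearray(32)
pools = {0: [], 1: [], 2: []}   # one pool per array index; plain lists for simplicity

# Pretend block m6 was allocated earlier: its index portion at address 8 holds
# pool index 2, and address 9 was the address returned to the application.
storage[8] = 2
a6 = 9

def release(address: int) -> None:
    """Return a memory block to its pool in constant time."""
    ai = address - INDEX_WIDTH    # access information: step back over the index portion
    pool_index = storage[ai]      # read the array index stored in the block itself
    pools[pool_index].append(ai)  # no searching: the block names its own pool

release(a6)
```

Because the pool index travels with the block, the release never has to search for the right pool, which is what keeps this path constant-time.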
- the application 111 or any other application or process may send a request 113 to the controller 109 requesting a memory portion having a size of 4 to 7 bytes.
- the controller 109 will first identify the pool which is designed for storing access information relating to memory blocks having the least size that is sufficient and will identify the pool indexed by the index i 2 .
- the controller 109 will then check whether the pool indexed by the index i 2 contains access information.
- the controller 109 will find the access information ai 6 . From the access information or address ai 6 the controller 109 will determine or calculate the address a 6 being the start address of the memory block m 6 having the memory block size 7.
- the memory block m 6 is sufficiently large to satisfy the request 113 requesting a memory portion having a size of 4 to 7 bytes. Thereupon, the controller 109 will return the address a 6 to the application 111 and will also remove the access information ai 6 from the pool identified by the index i 2 .
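Putting the allocate, release, and reuse steps above together, a minimal sketch of the whole cycle might look like the following. The class and method names are invented for this example, and the pool-index computation assumes the doubling algorithm (extended sizes 2, 4, 8, ...) described further below; the patent does not mandate this particular algorithm.

```python
import math

INDEX_WIDTH = 1  # one-byte index portion at the start of each extended block

class PoolAllocator:
    """Illustrative sketch of the pooled allocator: an array of pools of
    recycled extended blocks, with the pool index stored in the block itself."""

    def __init__(self, heap_size: int):
        self.storage = bytearray(heap_size)
        self.start = 0                       # start pointer into untouched heap
        self.pools = [[] for _ in range(8)]  # pool x holds blocks of 2**(x+1) bytes

    def _pool_index(self, size: int) -> int:
        # Smallest pool whose usable size (2**(x+1) - 1) covers the request.
        return max(0, math.floor(math.log2(size)))

    def allocate(self, size: int) -> int:
        x = self._pool_index(size)
        if self.pools[x]:                    # reuse a released block: constant time
            ai = self.pools[x].pop()
        else:                                # carve a fresh block from the free end
            ai = self.start
            self.start += 2 ** (x + 1)
            self.storage[ai] = x             # write the pool index into the index portion
        return ai + INDEX_WIDTH              # first byte after the index

    def release(self, address: int) -> None:
        ai = address - INDEX_WIDTH
        self.pools[self.storage[ai]].append(ai)

alloc = PoolAllocator(64)
a = alloc.allocate(6)      # pool 2: extended size 8, usable size 7
alloc.release(a)
b = alloc.allocate(5)      # same pool, so the released block is reused
```

In the final step the second allocation is served from the pool, so the start pointer does not move again; this mirrors the reuse of block m 6 in the walk-through above.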
- the data container 115 and in particular the different pools indexed by the indices i 0 , i 1 , i 2 , i 3 , . . . will contain dynamically changing access information instances, wherein only a particular situation is depicted in the FIGURE.
- the pools may be stored in an array-like structure 115 (or another structure with constant-time access).
- the possible memory block sizes (and their matching pools) may be ordered by an algorithm so that the array index i 1 , i 2 , i 3 , . . . of a pool for a certain memory block size may be calculated in constant time.
- The invention does not require any specific algorithm to be used; it only requires that the algorithm can be executed in constant time. The implementation could even allow for configuration of the algorithm.
- An array index to the matching pool may be stored in the first byte(s) of each extended memory block (could have been the last byte(s) instead).
- An extended memory block may be the memory block extended (at the start or the end) with the index portion. For the requester to be able to store data in the memory block, the size has to be at least index-size+1, but a larger smallest memory block size could be chosen to reduce overhead. Other considerations that could be taken into account (but not limited to):
- when a block is released, the size of the index portion i 1 , i 2 , i 3 , i 4 , i 5 , i 6 , i 7 is subtracted from the returned address to get the address (or access information) ai 1 , ai 2 , ai 3 , ai 4 , ai 5 , ai 6 , ai 7 of the stored index, which identifies the array entrance that has the matching pool.
- the address of the index represents the address of the extended memory block.
- the extended memory block address, i.e. the access information ai 1 , ai 2 , ai 3 , ai 4 , ai 5 , ai 6 , ai 7 , is then stored in the matching pool.
- the pool is stored or maintained in a container 115 that is able to access its elements in constant time.
- the pool is a container that is able to add an element in constant time and remove at least one of its elements in constant time.
- since the index i 1 , i 2 , i 3 , . . . of the array entrance is stored at the start of each memory block, no searching is necessary, and choosing the right pool can be done in constant time.
- the pool for the smallest block size that is big enough for the request plus the size of the index is checked for emptiness. If the pool is empty, then a new memory block of that size is created from the available heap 120 .
- the array index is written into the index portion of the new block, and the address a 1 , a 2 , a 3 , a 4 , a 5 , a 6 , a 7 of the first byte after the index is returned to the requester.
- when a block is reused, one of the memory addresses ai 1 , ai 2 , ai 3 , ai 4 , ai 5 , ai 6 , ai 7 is removed from the corresponding pool and the address a 1 , a 2 , a 3 , a 4 , a 5 , a 6 , a 7 of the first byte after the index in the block is returned to the requester.
- An example of the invention uses an algorithm where the first pool contains extended memory blocks of size 2 and each subsequent pool is for extended memory blocks of twice the size of the previous pool, but other algorithms could be used as well. It is important that the algorithm can calculate the index to the appropriate pool, based on the size of the memory request, in constant time. Other considerations that could be taken into account (but not limited to):
- index 0 is for extended memory blocks of size 2, index 1 is for extended memory blocks of size 4, and so on:
- a request for 1 byte should look for a free memory block address in the pool that is in array entrance 0, a [2 . . . 3] byte request in entrance 1, a [4 . . . 7] byte request in entrance 2, and so on.
- Entrance 255 would contain addresses to memory blocks of 1.15792E+77 bytes (2^256), which should be sufficient for the near future. When it is not sufficient, or if an algorithm with a less steep curve is used, an index of 2 bytes with an array of up to 65536 entrances could be used. If still more is needed, 4 bytes with an array of up to 4294967296 entrances could be used (and so on).
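The capacity figures above can be checked with a short calculation; the helper names are invented for this sketch. With a 1-byte index there are 256 array entrances, and the last entrance, under the doubling rule, corresponds to extended blocks of 2^256 bytes:

```python
# Illustrative helpers (not from the patent) checking the index-width figures.
def pool_count(index_bytes: int) -> int:
    """Number of array entrances addressable by an index portion of this width."""
    return 2 ** (8 * index_bytes)

def max_extended_block_size(index_bytes: int) -> int:
    """Extended block size of the last pool, using the doubling rule f(x) = 2**(x+1)."""
    last_index = pool_count(index_bytes) - 1
    return 2 ** (last_index + 1)

one_byte_limit = max_extended_block_size(1)   # 2**256, roughly 1.15792E+77
```

Python's arbitrary-precision integers make the 2^256 figure exact here; a real controller would of course never materialise such a block, only reserve an array entrance for it.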
- the array 115 of pools could be limited to only contain sizes that are possible with the given amount of heap.
- Waste in memory blocks in use will on average be below 25% for the example algorithm, but the solution is not limited to the example algorithm. Waste (internal fragmentation) could easily be reduced to below 1% on average by using a less steep curve for the increase in block sizes, but at the cost of narrowing the range of request sizes that each memory block can fulfil. It is therefore a sub-optimization that can be adjusted to the needs of the specific system.
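The "below 25% on average" figure can be probed with a quick numerical check under the assumption of uniformly distributed request sizes; this is an illustrative experiment, not an analysis from the patent, and the function name is invented here.

```python
import math

def usable_size(request: int) -> int:
    """Usable bytes of the block serving a request, with doubling pools
    (extended size 2**(x+1)) and a 1-byte index portion."""
    x = math.floor(math.log2(request))   # pool index f(x) for the request
    return 2 ** (x + 1) - 1              # extended size minus the index byte

# Relative internal fragmentation for each request size in a uniform range.
wastes = [(usable_size(s) - s) / usable_size(s) for s in range(1, 4096)]
average_waste = sum(wastes) / len(wastes)
```

For this uniform workload the average stays below 0.25, consistent with the claim; a flatter size curve would trade more pools for less per-block waste.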
- Availability is not lowered by external fragmentation of the heap. This issue is handled by building pools of the needed sizes from the needs of the system itself, which prevents external fragmentation of the heap from reducing the size and number of large memory blocks. Running the system for longer periods of time does not change that fact.
- the solution does not introduce non-determinism as a side effect of the use of memory given to the system, as memory mapping (used by QNX and others) does.
- the solution provides an easier and more reliable way of testing RAM needs. As external fragmentation of the heap has been eliminated, a specific set of needed memory blocks cannot become unavailable because the system has been running for a longer period of time (changed fragmentation). Only if the system needs more memory blocks of a size than have been used earlier, and if the free heap (which is guaranteed to be contiguous) is not large enough for the new request, will the solution be unable to fulfil the request. If the needed extra memory is installed in the system, the same situation is guaranteed not to happen again, and running the system for longer periods of time will still not change that fact (as fragmentation is eliminated).
- An embodiment of the invention automatically scales with changes that are made to the software, as the sizes of the pools are determined by the needs of the software when it executes.
- the developer does not have to spend time on creation or maintenance of pools.
- the developer does not have to reconsider the sizes of the pools when the software is changed.
- the solution automatically scales to the different needs of different installations, within the limits of the RAM installed and available at any time.
- When the system (heap manager) is restarted after the needed memory blocks have fundamentally changed, the heap manager will adapt and scale to the different needs that may evolve over time at any installation. Normally this would only happen if the problem domain that the system supervises/controls also fundamentally changes, and in such situations it is likely that the system is restarted anyway.
- An embodiment of the invention could be a one-time implementation that could be used for many systems.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Memory System (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
Description
-
- A higher minimum size may reduce overhead and increase waste (on average, for used memory blocks)
- A higher minimum size may also reduce the number of pools and may therefore make the pool with the smallest size more flexible, as more different request sizes fall within this pool. The minimum size could also be configurable.
-
- Increased waste of blocks in use.
- Higher chance for an unused block to be usable again.
- Fewer unused memory blocks on average.
f(x)=2^(x+1)
Where x is the index into the array, and f(x) returns the memory block size that the pool is used for.
Array index | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | . . . |
Extended mem. block size | 2 | 4 | 8 | 16 | 32 | 64 | 128 | 256 | 512 | . . . |
Overhead for index | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | . . . |
Available storage | 1 | 3 | 7 | 15 | 31 | 63 | 127 | 255 | 511 | . . . |
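The table above can be regenerated directly from f(x)=2^(x+1) with a 1-byte index overhead; the following sketch (with names invented here) rebuilds its rows:

```python
# Rebuild the table rows: (array index, extended block size, index overhead,
# available storage). The 1-byte overhead matches the table above.
INDEX_OVERHEAD = 1

rows = []
for x in range(9):
    extended = 2 ** (x + 1)   # f(x) = 2^(x+1)
    rows.append((x, extended, INDEX_OVERHEAD, extended - INDEX_OVERHEAD))
```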
f(x)=round_floor(log2(x))
Where x is the number of bytes (the memory portion size) that the requester needs and f(x) returns the index to the array that has a pool for the smallest block size that is big enough. round_floor( ) truncates any digits after the decimal point. Examples:
x | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 |
f(x) | 0 | 1 | 1 | 2 | 2 | 2 | 2 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4 |
x | 1000 | 1E+06 | 1E+09 | 1E+12 | 1E+15 | 1E+18 | 1E+21 | 1E+24 | 1E+27 | . . . |
f(x) | 9 | 19 | 29 | 39 | 49 | 59 | 69 | 79 | 89 | . . . |
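The request-to-pool mapping tabulated above can be reproduced in a couple of lines; the function name is invented for this sketch:

```python
import math

def pool_for_request(x: int) -> int:
    """f(x) = round_floor(log2(x)): array index of the smallest pool whose
    blocks can hold a request of x bytes."""
    return math.floor(math.log2(x))

# Spot-check the mapping against both tables above.
small = [pool_for_request(x) for x in (1, 2, 3, 4, 7, 8, 15, 16, 24)]
large = [pool_for_request(10 ** n) for n in (3, 6, 9, 12)]
```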
Claims (16)
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EPEP11155942 | 2011-02-25 | ||
EP11155942 | 2011-02-25 | ||
EP11155942A EP2492816A1 (en) | 2011-02-25 | 2011-02-25 | Method for managing physical memory of a data storage and data storage management system |
Publications (2)
Publication Number | Publication Date |
---|---|
US20120221805A1 US20120221805A1 (en) | 2012-08-30 |
US9367441B2 true US9367441B2 (en) | 2016-06-14 |
Family
ID=44359188
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/399,010 Expired - Fee Related US9367441B2 (en) | 2011-02-25 | 2012-02-17 | Method for managing physical memory of a data storage and data storage management system |
Country Status (4)
Country | Link |
---|---|
US (1) | US9367441B2 (en) |
EP (1) | EP2492816A1 (en) |
CN (1) | CN102693186A (en) |
CA (1) | CA2768956A1 (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102014112496A1 (en) * | 2014-08-29 | 2016-03-03 | Bundesdruckerei Gmbh | Memory management for a chip card |
US10516767B2 (en) * | 2016-04-18 | 2019-12-24 | Globalfoundries Inc. | Unifying realtime and static data for presenting over a web service |
US11113190B2 (en) * | 2016-11-11 | 2021-09-07 | Microsoft Technology Licensing, Llc | Mutable type builder |
US11537518B2 (en) * | 2017-09-26 | 2022-12-27 | Adobe Inc. | Constraining memory use for overlapping virtual memory operations |
CN110119637B (en) * | 2018-02-07 | 2023-04-14 | 联发科技股份有限公司 | Hardware control method and hardware control system |
KR20200106368A (en) * | 2019-03-04 | 2020-09-14 | 엘에스일렉트릭(주) | Apparatus And Method For Managing Memory Of Inverter |
CN114237501B (en) * | 2021-12-09 | 2024-02-27 | 北京美信时代科技有限公司 | Method for rapidly identifying cold data and computer readable storage medium |
Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020069338A1 (en) | 2000-12-01 | 2002-06-06 | Kadir Ozdemir | System and method for managing the memory in a computer system |
US6427195B1 (en) * | 2000-06-13 | 2002-07-30 | Hewlett-Packard Company | Thread local cache memory allocator in a multitasking operating system |
US6442661B1 (en) * | 2000-02-29 | 2002-08-27 | Quantum Corporation | Self-tuning memory management for computer systems |
US20020144073A1 (en) * | 2001-04-03 | 2002-10-03 | Ehud Trainin | Method for memory heap and buddy system management for service aware networks |
US20030145185A1 (en) | 2002-01-31 | 2003-07-31 | Christopher Lawton | Utilizing overhead in fixed length memory block pools |
US7039774B1 (en) * | 2002-02-05 | 2006-05-02 | Juniper Networks, Inc. | Memory allocation using a memory address pool |
US20060282644A1 (en) * | 2005-06-08 | 2006-12-14 | Micron Technology, Inc. | Robust index storage for non-volatile memory |
US20090006502A1 (en) * | 2007-06-26 | 2009-01-01 | Microsoft Corporation | Application-Specific Heap Management |
CN101414281A (en) | 2007-10-19 | 2009-04-22 | 大唐移动通信设备有限公司 | Internal memory management method and system |
US20090216988A1 (en) * | 2008-02-27 | 2009-08-27 | Michael Palladino | Low overhead memory management system and method |
US7603529B1 (en) * | 2006-03-22 | 2009-10-13 | Emc Corporation | Methods, systems, and computer program products for mapped logical unit (MLU) replications, storage, and retrieval in a redundant array of inexpensive disks (RAID) environment |
CN101702138A (en) | 2009-10-30 | 2010-05-05 | 深圳市新飞扬数码技术有限公司 | Memory management method, memory management system and server |
CN101763308A (en) | 2009-12-25 | 2010-06-30 | 中国科学院计算技术研究所 | Pool allocation method for heap data at running time |
US20100312984A1 (en) * | 2008-02-08 | 2010-12-09 | Freescale Semiconductor, Inc. | Memory management |
US20110125958A1 (en) * | 1999-10-21 | 2011-05-26 | Takuji Maeda | Semiconductor memory card access apparatus, a computer-readable recording medium, an initialization method, and a semiconductor memory card |
US20110246742A1 (en) * | 2010-04-01 | 2011-10-06 | Kogen Clark C | Memory pooling in segmented memory architecture |
-
2011
- 2011-02-25 EP EP11155942A patent/EP2492816A1/en not_active Withdrawn
-
2012
- 2012-02-17 US US13/399,010 patent/US9367441B2/en not_active Expired - Fee Related
- 2012-02-23 CA CA2768956A patent/CA2768956A1/en not_active Abandoned
- 2012-02-24 CN CN2012100465245A patent/CN102693186A/en active Pending
Non-Patent Citations (1)
Title |
---|
QNX Software Systems; QNX Neutrino RTOS; http://www.qnx.com/products/neutrino-rtos/neutrino-rtos.html, Mar. 23, 2010. |
Also Published As
Publication number | Publication date |
---|---|
CA2768956A1 (en) | 2012-08-25 |
US20120221805A1 (en) | 2012-08-30 |
EP2492816A1 (en) | 2012-08-29 |
CN102693186A (en) | 2012-09-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9367441B2 (en) | Method for managing physical memory of a data storage and data storage management system | |
US10831387B1 (en) | Snapshot reservations in a distributed storage system | |
US9355028B2 (en) | Data-storage device and flash memory control method | |
CN1308793C (en) | Method and system for machine memory power and availability management | |
US9086952B2 (en) | Memory management and method for allocation using free-list | |
US20060288159A1 (en) | Method of controlling cache allocation | |
US8479205B2 (en) | Schedule control program and schedule control method | |
US6363468B1 (en) | System and method for allocating memory by partitioning a memory | |
US11403224B2 (en) | Method and system for managing buffer device in storage system | |
US8805896B2 (en) | System and method for use with garbage collected languages for enabling the allocated heap memory to be updated at runtime | |
JP5063069B2 (en) | Memory allocation method, apparatus, and program for multi-node computer | |
CN1289419A (en) | Compression store free-space management | |
CN112749135B (en) | Method, apparatus and computer program product for balancing storage space of a file system | |
US20150106565A1 (en) | Storage controlling apparatus, information processing apparatus, and computer-readable recording medium having stored therein storage controlling program | |
US8972629B2 (en) | Low-contention update buffer queuing for large systems | |
US10846143B2 (en) | Predicting capacity of shared virtual machine resources | |
US11403026B2 (en) | Method, device and computer program product for managing storage system | |
CN113961302A (en) | Resource allocation method, device, electronic equipment and storage medium | |
US9367439B2 (en) | Physical memory usage prediction | |
CN111506400A (en) | Computing resource allocation system, method, device and computer equipment | |
US9037622B1 (en) | System and method for managing spool space in a mixed SSD and HDD storage environment | |
CN118467172A (en) | Host memory management method, device, storage medium and server | |
CN118550717A (en) | Memory resource processing method, electronic equipment and storage medium | |
CN118363875A (en) | Memory recovery method, device, equipment, medium and product | |
CN118363879A (en) | Memory recovery method, device, equipment, medium and product |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: SIEMENS AKTIENGESELLSCHAFT, GERMANY Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HYUL, IVAN SCHULTZ;REEL/FRAME:027722/0179 Effective date: 20111103 |
|
AS | Assignment |
Owner name: SIEMENS AKTIENGESELLSCHAFT, GERMANY Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE ASSIGNOR NAME MISPELLED: HYUL, IVAN SCHULTZ PREVIOUSLY RECORDED ON REEL 027722 FRAME 0179. ASSIGNOR(S) HEREBY CONFIRMS THE SHOULD BE: HJUL, IVAN SCHULTZ;ASSIGNOR:HJUL, IVAN SCHULTZ;REEL/FRAME:027729/0629 Effective date: 20111103 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
FEPP | Fee payment procedure |
Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
LAPS | Lapse for failure to pay maintenance fees |
Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STCH | Information on status: patent discontinuation |
Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
|
FP | Expired due to failure to pay maintenance fee |
Effective date: 20200614 |