WO2010070529A2 - A method, apparatus and computer program for moving data in memory - Google Patents

Info

Publication number
WO2010070529A2
WO2010070529A2 (PCT/IB2009/055574)
Authority
WO
WIPO (PCT)
Prior art keywords
memory
data
moving
bank
page
Prior art date
Application number
PCT/IB2009/055574
Other languages
French (fr)
Other versions
WO2010070529A3 (en)
Inventor
Richard Fitzgerald
Original Assignee
Nokia Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nokia Corporation filed Critical Nokia Corporation
Publication of WO2010070529A2 publication Critical patent/WO2010070529A2/en
Publication of WO2010070529A3 publication Critical patent/WO2010070529A3/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/0223User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023Free address space management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/06Addressing a physical block of locations, e.g. base addressing, module addressing, memory dedication
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/10Providing a specific technical effect
    • G06F2212/1028Power efficiency
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • Examples of the present invention relate to a method, apparatus and computer program for compacting a random access memory of a device.
  • Memory which is write accessible may be prone to data fragmentation, which involves data stored in the memory being separated into smaller fragments of data which are distributed throughout the memory storage. Due to fragmentation, the data is less densely concentrated and more spread out across the memory space, resulting in smaller contiguous blocks of free memory. Data which is distributed across a memory space in this way is un-compacted, whereas data which is less fragmented and contiguous may be described as being compacted.
  • a first aspect of the invention provides a method comprising: assigning priorities to a plurality of memory banks in a memory resource; identifying at least one data fragment in the memory resource that is suitable for moving between the memory banks; and moving the identified data fragments into a higher priority memory bank, if there is space in the higher priority bank to do so.
  • a second aspect of the invention provides an apparatus comprising: at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: assign priorities to a plurality of memory banks in a memory resource; identify at least one data fragment in the memory resource that is suitable for moving between the memory banks; and move the identified data fragments into a higher priority memory bank, if there is space in the higher priority bank to do so.
  • a third aspect of the invention provides a computer program comprising: code for assigning priorities to a plurality of memory banks in a memory resource; code for identifying at least one data fragment in the memory resource that is suitable for moving between the memory banks; and code for moving the identified data fragments into a higher priority memory bank, if there is space in the higher priority bank to do so.
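The priority-driven move described by these first three aspects can be illustrated with a minimal sketch. The bank model and function names below are hypothetical, not taken from the patent:

```python
# Toy model of the claimed method: banks are dicts with a priority, a
# capacity, and a list of resident fragments. Movable fragments are moved
# into the highest-priority bank that still has space.

def compact(banks):
    """Move movable fragments into higher priority banks when space allows."""
    # Visit banks from lowest to highest priority.
    for src in sorted(banks, key=lambda b: b["priority"]):
        for frag in list(src["fragments"]):
            if not frag["movable"]:
                continue
            # Try the highest-priority bank with free space first.
            for dst in sorted(banks, key=lambda b: -b["priority"]):
                if dst["priority"] <= src["priority"]:
                    break  # no higher priority bank has room for this fragment
                if len(dst["fragments"]) < dst["capacity"]:
                    src["fragments"].remove(frag)
                    dst["fragments"].append(frag)
                    break
    return banks
```

After a call to `compact`, lower-priority banks hold only immovable fragments or overflow, matching the intent that emptied banks can then be powered down.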
  • a fourth aspect of the invention provides a method comprising: identifying data fragments in a memory resource that are suitable for moving; and moving identified data fragments in a manner that compacts data within the memory resource, wherein identifying and moving are performed synchronously with one another.
  • a fifth aspect of the invention provides an apparatus comprising: at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: identify data fragments in a memory resource that are suitable for moving; and move identified data fragments in a manner that compacts data within the memory resource, wherein identifying and moving are performed synchronously with one another.
  • a sixth aspect of the invention provides a computer program comprising: code for identifying data fragments in a memory resource that are suitable for moving; and code for moving identified data fragments in a manner that compacts data within the memory resource, wherein identifying and moving are performed synchronously with one another.
  • a seventh aspect of the invention provides a method comprising: identifying data fragments in a memory resource that are suitable for moving; and moving identified data fragments in a manner that compacts data within the memory resource, wherein identifying and moving are performed asynchronously with one another.
  • An eighth aspect of the invention provides an apparatus comprising: at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: identify data fragments in a memory resource that are suitable for moving; and move identified data fragments in a manner that compacts data within the memory resource, wherein identifying and moving are performed asynchronously with one another.
  • a ninth aspect of the invention provides a computer program comprising: code for identifying data fragments in a memory resource that are suitable for moving; and code for moving identified data fragments in a manner that compacts data within the memory resource, wherein identifying and moving are performed asynchronously with one another.
  • a tenth aspect of the invention provides a method comprising: copying a data fragment from a source location to a destination location to initiate a move operation on the data fragment, the source and destination locations being within a memory resource of an interrupt driven processing unit; monitoring whether any interrupts occur during the copying operation which suggest that the data fragment at the source location has been modified during the copy; aborting the move operation if an interrupt indicating modification did occur; and finalising the move operation if an interrupt indicating modification did not occur.
  • An eleventh aspect of the invention provides an apparatus comprising: at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: copy a data fragment from a source location to a destination location to initiate a move operation on the data fragment, the source and destination locations being within a memory resource of an interrupt driven processing unit; monitor whether any interrupts occur during the copying operation which suggest that the data fragment at the source location has been modified during the copy; abort the move operation if an interrupt indicating modification did occur; and finalise the move operation if an interrupt indicating modification did not occur.
  • a twelfth aspect of the invention provides a computer program comprising: code for copying a data fragment from a source location to a destination location to initiate a move operation on the data fragment, the source and destination locations being within a memory resource of an interrupt driven processing unit; code for monitoring whether any interrupts occur during the copying operation which suggest that the data fragment at the source location has been modified during the copy; code for aborting the move operation if an interrupt indicating modification did occur; and code for finalising the move operation if an interrupt indicating modification did not occur.
  • a thirteenth aspect of the invention provides a method comprising: identifying a type of data comprised in a memory page, copying the memory page from a source location to a destination location to initiate a move operation on the memory page, the source and destination locations being within a memory resource of an object oriented computer system; subsequent to the copying operation, calling a page move handling function of an object of the object oriented computer system, wherein the object used is determined by the identified type of data comprised in the memory page and the page move handling function of the object is configured to perform any remaining operations to move a page of the identified type.
  • a fourteenth aspect of the invention provides an apparatus comprising: at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: identify a type of data comprised in a memory page, copy the memory page from a source location to a destination location to initiate a move operation on the memory page, the source and destination locations being within a memory resource of an object oriented computer system, subsequent to the copying operation, call a page move handling function of an object of the object oriented computer system, wherein the object used is determined by the identified type of data comprised in the memory page and the page move handling function of the object is configured to perform any remaining operations to move a page of the identified type.
  • a fifteenth aspect of the invention provides a computer program comprising: code for identifying a type of data comprised in a memory page, code for copying the memory page from a source location to a destination location to initiate a move operation on the memory page, the source and destination locations being within a memory resource of an object oriented computer system, code for, subsequent to the copying operation, calling a page move handling function of an object of the object oriented computer system, wherein the object used is determined by the identified type of data comprised in the memory page and the page move handling function of the object is configured to perform any remaining operations to move a page of the identified type.
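The type-dispatched page move of the thirteenth to fifteenth aspects can be sketched as follows. The handler classes, method names, and page model are illustrative assumptions; the patent only specifies that an object chosen by data type performs the remaining operations after the raw copy:

```python
# Sketch of type-dispatched page moving: after the raw copy, a handler
# object selected by the page's identified data type finishes the move.
# All class and method names here are hypothetical.

class PageMoveHandler:
    def finish_move(self, page, src, dst):
        raise NotImplementedError

class CodePageHandler(PageMoveHandler):
    def finish_move(self, page, src, dst):
        page["address"] = dst          # e.g. patch mappings for code pages
        return "code-page moved"

class DataPageHandler(PageMoveHandler):
    def finish_move(self, page, src, dst):
        page["address"] = dst          # e.g. update heap bookkeeping
        return "data-page moved"

HANDLERS = {"code": CodePageHandler(), "data": DataPageHandler()}

def move_page(page, memory, dst):
    src = page["address"]
    memory[dst] = memory[src]          # 1. copy the page contents
    memory[src] = None                 #    source frame becomes free
    handler = HANDLERS[page["type"]]   # 2. pick handler by identified type
    return handler.finish_move(page, src, dst)  # 3. remaining operations
```

The dispatch table keeps type-specific clean-up out of the generic copy path, which is the design benefit the claim describes.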
  • a sixteenth aspect of the invention provides a method comprising: assigning priorities to a plurality of memory banks in a memory resource; identifying at least one data fragment in the memory resource that is suitable for moving between the memory banks.
  • a seventeenth aspect of the invention provides an apparatus comprising: at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: assign priorities to a plurality of memory banks in a memory resource; identify at least one data fragment in the memory resource that is suitable for moving between the memory banks.
  • An eighteenth aspect of the invention provides a method comprising: moving data fragments of a memory resource, said data fragments having been identified as suitable for moving between memory banks of the memory resource, at least one memory bank having been assigned a priority, said data fragments being moved into a higher priority memory bank if there is space in the higher priority bank to do so.
  • a nineteenth aspect of the invention provides an apparatus comprising: at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: move data fragments of a memory resource, said data fragments having been identified as suitable for moving between memory banks of the memory resource, at least one memory bank having been assigned a priority, said data fragments being moved into a higher priority memory bank if there is space in the higher priority bank to do so.
  • figure 1 illustrates a memory containing some fragmented data
  • figure 2 illustrates the memory of figure 1 in which the fragmented data has been defragmented
  • figure 3 illustrates a mobile computing device
  • figure 4 illustrates a memory containing some fragmented data.
  • FIG 5 illustrates an example embodiment of the invention in which the search system and the defragmentation system are separate threads;
  • figure 6 illustrates a memory containing some fragmented data, wherein each data fragment has been tagged by the search algorithm;
  • figure 7 illustrates the memory of figure 6, wherein the tags have been arranged in an ordered list;
  • figure 8 illustrates an example embodiment in which the search algorithm searches the memory banks in order of priority;
  • figure 9 is a flow diagram of the operation of the example embodiment of figure 8;
  • figure 10 illustrates an example embodiment in which the search algorithm searches the banks of a particular priority one or more times before moving onto the next one; and
  • figure 11 is a flow diagram of the operation of the example embodiment of figure 10.
  • RAM random access memory
  • when a device begins to store data in a memory, the memory is empty and the device can choose to locate the data wherever it likes in the memory. However, as the memory fills up, the choice becomes reduced and the empty spaces in the memory may become too small to hold a single large piece of data.
  • the data is then split up into smaller pieces, and the smaller pieces are placed into the free spaces in the memory.
  • a record is kept of where each small piece of the data has been stored so that the pieces can be found and put back together when the data is needed by the device. For example, for storing files on a permanent storage medium (e.g. a hard disk), this process allows the hard disk to be used to store data files which are larger than any of the individual unused spaces on the hard disk.
  • a permanent storage medium e.g. a hard disk
  • defragmentation techniques operate by rearranging the data on a storage medium so that, as far as possible, all data relating to one process or file is contiguous. In this example embodiment, some free space is required on the storage medium in order to temporarily store data as it is moved during the defragmentation process.
  • In figure 1, a storage medium is shown containing files A, B, C, D, and E.
  • data A and E are fragmented into two pieces each: A1 and A2, and E1 and E2, respectively.
  • An example of a defragmentation algorithm is the following:
  • 1) A sweep of the storage memory is performed to establish a map of the stored data and the fragmentations that have occurred.
  • 2) Data set B, which occupies the space immediately following fragment A1, is moved to a temporary location, and fragment A2 is moved to the end of fragment A1 to form contiguous data A.
  • 3) The second data set B is then moved from the temporary location to a position immediately following A.
  • 4) This process continues until all of the fragments have been rearranged to form contiguous data, and all data has been arranged contiguously in the medium.
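The sweep-and-rearrange process above can be simulated with a toy model in which the medium is a list of cells holding fragment labels or `None` for free space. This is an illustrative simplification, not the patent's algorithm verbatim:

```python
# Toy simulation of the defragmentation sweep: collect the fragments found
# during the sweep, then rebuild the medium with each data set contiguous
# and all free space gathered at the end.

def defragment(medium):
    """Pack all fragments contiguously at the start, preserving label order."""
    # 1) Sweep: map which labels are stored, in order of first appearance.
    order = []
    for cell in medium:
        if cell is not None and cell not in order:
            order.append(cell)
    # 2-4) Rebuild with each data set contiguous, free space at the end.
    counts = {label: sum(1 for c in medium if c == label) for label in order}
    packed = [label for label in order for _ in range(counts[label])]
    return packed + [None] * (len(medium) - len(packed))
```

In the real algorithm the moves happen in place via the temporary location; the simulation only shows the end state that the sequence of moves converges to.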
  • FIG 2 shows the contiguous data arranged at the beginning of the storage medium to minimise future fragmentation.
  • Although figure 2 shows the data arranged contiguously, this is not always possible since certain files may be immovable.
  • defragmentation of a system's storage medium is scheduled by the user of the system to run at regular intervals.
  • the defragmentation process may be triggered when the system detects a certain level of fragmentation on the storage medium.
  • an additional advantage of defragmenting and compacting data is to save power.
  • large memory arrays are divided into regions called banks.
  • each bank can be placed into a low- power data retention mode independently of the other banks. While in this low-power mode the data content is preserved but is not accessible.
  • by compacting the data into the smallest possible number of active banks the number of banks which can be switched to low-power mode is maximised, and therefore the power consumption is minimised.
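The power saving follows directly from how many banks remain active after compaction. A small arithmetic sketch (hypothetical parameter names) makes the relationship concrete:

```python
# After compaction, the number of active banks is the smallest number of
# banks that can hold the used pages; every other bank can enter the
# low-power data retention mode described above.

def banks_powered_down(used_pages, bank_size, num_banks):
    """Return how many banks can be switched to low-power retention mode."""
    active = -(-used_pages // bank_size)  # ceiling division: banks still needed
    return num_banks - active
```

For example, 10 used pages in 4 banks of 8 pages each need only 2 active banks once compacted, so 2 banks can be powered down.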
  • a mobile computing device uses system idle time to run RAM defragmentation processes. Similar to file system fragmentation, random access memory containing active processes and working data can become fragmented as data is frequently swapped in and out of RAM.
  • One advantage of this exemplary method of handling defragmentation is that the defragmentation is only performed whilst the device is idle, which prevents the defragmentation from having any visible effect on the performance of the device.
  • the defragmentation algorithm can be configured for different situations.
  • the example embodiment of Figure 3 shows a mobile computing device 100 with a central processing unit (CPU) 10, random access memory 30, and permanent storage medium 40.
  • CPU central processing unit
  • RAM random access memory
  • the CPU, RAM, and permanent storage medium are in communication with each other by system bus 20.
  • RAM compaction system 50 is a processor thread running on CPU 10 and has high level access to RAM 30.
  • the RAM compaction system may alternatively be run on a secondary processor or be implemented as a hardware layer between the CPU and the RAM.
  • the RAM 30 is spread across multiple memory banks.
  • each memory bank is assigned a priority.
  • the priority of the bank indicates the importance of keeping the bank powered up and in use.
  • lower priority memory banks are the memory banks which the system prefers to use the least, and so will work hardest to empty.
  • the priority of a memory bank may depend, for example, on the energy consumption used by the memory bank or the type of technology used in the memory bank. For example, if the energy consumed by the particular memory bank to retain data is relatively high, it may be assigned a low priority. In this example embodiment, if read and/or write access to the memory bank is relatively slow, it may again be assigned a low priority.
  • figure 4 shows just three memory banks: a low priority memory bank, a medium priority memory bank and a high priority memory bank.
  • any number of memory banks can be used.
  • Such example embodiments are not limited to just three discrete levels of priority.
  • other numbers of priority levels could be used.
  • every memory bank is assigned a different priority level (possibly arbitrarily, in the case where two banks are in reality equally important); unique priority levels simplify some implementations.
  • more than one bank may share the same priority level.
  • the operating system memory allocation algorithm will attempt to satisfy memory requests from the highest possible priority bank, moving to lower-priority banks until the total required amount of memory has been allocated. In this example embodiment, as the memory becomes fragmented it will become more difficult to satisfy memory requests from high priority banks.
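The allocation policy described above can be sketched as a greedy loop over banks in descending priority order. The bank representation and function name are illustrative assumptions:

```python
# Sketch of the allocation policy: satisfy a memory request from the
# highest-priority banks first, spilling into lower-priority banks only
# when the higher ones are full.

def allocate(banks, pages_needed):
    """Return {bank_name: pages_taken}; raise MemoryError if exhausted."""
    taken = {}
    for bank in sorted(banks, key=lambda b: -b["priority"]):
        if pages_needed == 0:
            break
        grab = min(bank["free"], pages_needed)   # take what this bank can give
        if grab:
            bank["free"] -= grab
            taken[bank["name"]] = grab
            pages_needed -= grab
    if pages_needed:
        raise MemoryError("request cannot be satisfied")
    return taken
```

As the text notes, fragmentation shows up here as high-priority banks having little `free` space, forcing more of each request into low-priority banks.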
  • defragmentation will rectify this by sorting the information in memory, based on bank priority.
  • memory will be moved out of the low priority/power hungry memory banks into high priority/low power usage memory banks, allowing the more power hungry banks to be powered down, reducing the total power consumption of the memory.
  • page tables and page directories are forcibly allocated into high priority banks. This is achieved by searching the high priority banks for a sufficiently large contiguous block, moving the contiguous block into lower priority memory, then moving the page tables or page directories into the newly freed space in the high priority memory bank. This reduces the likelihood that these structures will need to be moved again when the memory is later defragmented.
  • Since access to page tables and directories is particularly critical to the operation of the device, reduced movement of these structures will result in higher device performance.
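The evict-then-place sequence for page tables can be sketched on a toy model where each bank is a list of page labels (`None` meaning free). The names are hypothetical, and the sketch assumes the lower-priority bank has enough free pages to receive the evicted block:

```python
# Sketch of forcing a page table into a high-priority bank: find a
# contiguous occupied block, evict it to free space in lower-priority
# memory, then place the page table ("PT") in the freed hole.

def place_page_table(high_bank, low_bank, table_pages):
    """Return True if the page table was placed in the high-priority bank."""
    # 1. Find a sufficiently large contiguous occupied block.
    start = None
    for i in range(len(high_bank) - table_pages + 1):
        if all(high_bank[i + k] is not None for k in range(table_pages)):
            start = i
            break
    if start is None:
        return False
    # 2. Move that block into free space in lower-priority memory
    #    (assumes low_bank has enough None slots).
    for k in range(table_pages):
        j = low_bank.index(None)
        low_bank[j] = high_bank[start + k]
    # 3. Put the page table into the newly freed high-priority space.
    for k in range(table_pages):
        high_bank[start + k] = "PT"
    return True
```

Once placed, the "PT" pages count as immovable high-priority content, so later defragmentation passes need not touch them, which is the benefit the text describes.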
  • Figure 5 shows an example embodiment of the RAM compaction system 50 in which it is broken down into two components, a search system 110 and a defragmentation system 120.
  • search system 110 operates by identifying which memory areas can be moved and how to move them.
  • defragmentation system 120 operates by determining when and how to run the defragmentation on the memory areas identified by search system 110.
  • separation of the RAM compaction system into the two components advantageously allows the components to be individually configured and individually optimised.
  • the defragmentation mechanism can be replaced without changing the search algorithm, for example to provide a hardware-optimised defragmentation or to allow defragmentation at times other than background idle.
  • the componentisation of the RAM compaction system 50 allows the overall load of compaction to be more easily and effectively distributed, for example across multiple cores.
  • the search system 110 may use a first core, whilst the defragmentation system may use a second core.
  • search component 110 tags each part of the memory with a flag 810 which indicates whether it is movable or non-movable.
  • the tagging of a part of memory is performed in dependence on the data type stored in each part of the memory and whether it would be suitable for moving.
  • the memory tags are compiled into an ordered list 910 within which fast searching can be performed. This allows fragments which are potentially moveable to be quickly identified by defragmentation system 120.
  • the location of the tagged data in memory or the priority of the memory bank in which the tagged data is stored may be used to determine the list order.
  • the tagging list is ordered by bank priority and then the sequential order of the page within each bank; this allows the defragmentation component 120 to easily ignore banks where there is little or no benefit to be gained from defragmentation.
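The ordering rule for the tag list — bank priority first, then the sequential order of the page within each bank — can be expressed in a few lines. The tuple representation is an illustrative assumption:

```python
# Sketch of the ordered tag list: each movable page yields a
# (bank_priority, page_index) tag; sorting puts the lowest-priority banks
# (where moving pays off most) first, pages in sequential order within each.

def build_tag_list(pages):
    """pages: iterable of (bank_priority, page_index, movable) tuples."""
    tags = [(prio, idx) for prio, idx, movable in pages if movable]
    return sorted(tags)  # bank priority first, then page order within the bank
```

Because the list is sorted this way, the defragmentation component can simply stop consuming it once it reaches a priority level where moving is no longer beneficial.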
  • apart from searching new data stored in the memory and updating the ordered list to include the new data, no further searching is required.
  • the compilation of the ordered list is performed by search system 110 and subsequent access to the ordered list is provided to defragmentation system 120.
  • the ordered list allows accelerated defragmentation and compaction, as the RAM compaction system may subsequently refer to the index rather than searching the physical memory for a movable section of memory.
  • the list is stored as a bitmap so that it is faster to scan than a full table of searchable memory areas.
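A bitmap encoding of the movable-page list, one bit per page, can be sketched as follows (a Python integer stands in for the packed bit array a real implementation would use):

```python
# Sketch of the bitmap form of the list: bit i is set when page i is
# movable, so a scan touches machine words rather than table entries.

def to_bitmap(movable_flags):
    """Pack a list of booleans into an int, bit i = page i."""
    bits = 0
    for i, movable in enumerate(movable_flags):
        if movable:
            bits |= 1 << i
    return bits

def is_movable(bitmap, page):
    """Test one page's bit."""
    return bool(bitmap >> page & 1)
```

Scanning such a bitmap can also skip whole words of zero bits at once, which is where the speed advantage over a full table comes from.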
  • the search algorithm and defragmentation system may be run asynchronously to each other or synchronously.
  • the simpler example embodiment is synchronous operation in which the defragmentation system is launched for each fragment found by the search algorithm and the search algorithm waits for completion of the defragmentation before searching for the next movable fragment.
  • the memory fragment being moved is locked or tagged so that it cannot change type or status between the search and completion of the defragmentation, so that the conditions on which the decision to move this fragment was made do not become invalid when the defragmentation is performed.
  • the locking or tagging may be performed in various ways depending on the operating system; one possible example is to require a mutex to be held in order to change or move a fragment.
  • An alternative example embodiment is to allow the type to be changed and check at the end of defragmentation whether this has happened and, if so, discard the copied fragment and abandon the defragmentation of this single fragment.
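The synchronous mode, with a mutex guarding each fragment as in the example above, can be sketched as a simple loop in which the search yields one fragment at a time and waits for its move to complete. The data model and names are illustrative:

```python
import threading

# Sketch of synchronous operation: the search yields movable fragments one
# by one; each move runs under a lock so the fragment cannot change type or
# status between being found and being moved.

fragment_lock = threading.Lock()

def search(memory):
    """Yield indices of movable fragments, one at a time."""
    for i, frag in enumerate(memory):
        if frag is not None and frag["movable"]:
            yield i

def defragment_sync(memory):
    """Process each found fragment before searching for the next."""
    moved = []
    for i in search(memory):          # search resumes only after each move
        with fragment_lock:           # fragment cannot change mid-move
            moved.append(memory[i]["name"])
    return moved
```

Because the generator is only advanced after the previous move finishes, the search and the move never overlap, which is exactly the synchronous behaviour described.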
  • the operation of the asynchronous example embodiment allows the search and defragmentation to run independently, for example as separate threads or on separate cores on a multi-core system.
  • the list of fragments to move which was compiled by the search algorithm may be invalid when the defragmentation system runs because the status of some or all of the fragments has been changed.
  • a possible example implementation is that any operation that can change the type or status of a page, such as a deallocation, reallocation, locking or unlocking, will cause the page to be removed from the list compiled by search algorithm and thus ignored by the defragmentation system.
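That invalidation rule can be sketched with a small list structure whose entries are dropped whenever a page's type or status changes. The class and method names are hypothetical:

```python
# Sketch of asynchronous invalidation: any operation that changes a page's
# type or status (deallocation, reallocation, locking, unlocking) removes
# it from the search list, so the defragmentation thread never acts on a
# stale entry.

class MovablePageList:
    def __init__(self):
        self.pages = set()

    def add(self, page):
        self.pages.add(page)

    def on_status_change(self, page):
        # deallocation, reallocation, locking and unlocking all land here
        self.pages.discard(page)

    def next_to_move(self):
        return min(self.pages) if self.pages else None
```

The defragmentation thread only ever consumes entries still present in the set, so the race between the two threads is resolved by deletion rather than by locking every page.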
  • flags may be used by search component 110 to identify areas of memory suitable to be moved.
  • Figure 9 provides a flow diagram of the operation of the example embodiment of figure 8.
  • the search algorithm used by search system 110 of the RAM compaction system causes the component to operate according to the following: 1. Search (fig. 9, block 950) the lowest priority memory bank for areas of memory that can be moved into available space in a higher priority memory bank.
  • the highest priority bank is not searched because there is no bank of higher priority that its contents can usefully be moved to - i.e. moving an area of memory out of the highest priority bank will never result in an increase in compaction. This feature is included in the above operations.
  • the search system 110 may skip all banks which include immovable areas of memory. This is because it will not be possible to power-down a bank as long as it includes an immovable area, and there is therefore little to be gained by otherwise emptying the bank. A better solution may be to increase the priority of such banks, to ensure that they are filled in preference to other banks. If, on the other hand, it is expected that the bank will in the future contain no immovable areas, the bank may be searched (and, optionally, de-prioritised) in order to ensure it is kept as empty as possible so that it can be powered-down as soon as possible after the last immovable area ceases to exist.
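The bank-selection rules so far — search from lowest to highest priority, never search the highest-priority bank, and skip banks containing immovable areas — can be combined in one short sketch (the bank model is an illustrative assumption):

```python
# Sketch of which banks the search visits: lowest priority first, the
# highest-priority bank is never searched (there is nowhere better to move
# its contents), and banks holding immovable areas are skipped because
# they can never be powered down anyway.

def banks_to_search(banks):
    """Return bank names in search order."""
    ordered = sorted(banks, key=lambda b: b["priority"])
    candidates = ordered[:-1]                  # highest-priority bank skipped
    return [b["name"] for b in candidates
            if not b["has_immovable"]]         # immovable => cannot power down
```

A variant implementing the "better solution" in the text would instead raise the priority of banks with immovable areas rather than merely skipping them.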
  • the search algorithm causes search component 110 to do a sweep, searching from low to high priority memory banks, and resetting to the lowest priority bank once it has reached the highest priority bank.
  • search component 110 stores its search position when interrupted so that it can return to that position when the search is resumed.
  • the highest priority target location is found using the kernel allocator's normal page allocation function. If this is not possible, a separate search for a suitable target location can be made.
  • the search algorithm causes search system 110 to repeatedly search the lowest priority banks until they become completely free, before switching to a higher priority set of banks.
  • This example embodiment is shown in figure 10, and figure 11 provides a flow diagram of its operation.
  • the search component 110 of the RAM compaction system 50 operates according to the following:
  • the search algorithm of this latter example embodiment ensures that the search component 110 will always find any low-priority memory contents that are suitable for moving. In this example embodiment, whilst it may result in the search component 110 repeatedly scanning low-priority banks to pick up only a small amount of movable memory each time (especially if the search is frequently interrupted), this approach has the advantage of keeping the lowest-priority banks as free as possible.
  • the search algorithm causes search component 110 to perform one complete full search sweep from the low-priority banks to the high-priority banks first of all, before the search returns to the low-priority bank and proceeds as in either of the above example embodiments.
  • new allocations in the low-priority bank will not be moved until a complete scan has been done, but this is outweighed by the benefit that this algorithm maximises the amount of defragmentation that can be done with a single visit to each bank and so is more efficient overall.
  • the reason for this efficiency is that when a single sweep is used the search system 110 is less likely to become 'stuck' in a loop where a bank is cleared, then memory is allocated within it, then the bank is cleared again, and so on, and the bank oscillates between active and inactive.
  • the search system 110 is directed by the device to repeatedly sweep a specific bank with a view to clearing (and therefore powering-down) that particular bank as soon as possible.
  • the search algorithm is configured to predict which sections of memory are likely to become unmovable at a later time and mark these sections as unmovable. For example, memory which can be accessed by physical address or by a virtual mapping of physical RAM, whether it is being currently used or not, might be considered to be an unsuitable candidate for moving. Therefore, this memory can be marked as 'non-movable'.
  • an exemplary computer system comprises a main operating system (OS) with a processing unit and an interrupt handler.
  • OS main operating system
  • processes running on the computer system access the physical memory of a device by means of a virtual address provided by the operating system that maps to the physical address.
  • data fragments may be moved by moving the data fragment from one area of physical memory to another and then using a memory management unit (MMU) of the computer system to alter the virtual memory address mapping for the data fragment to point to the new physical address. This leaves the virtual memory looking unchanged to the processes running on the OS.
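The copy-then-remap sequence can be sketched on a toy page-table model. The dict-based page table and frame store are illustrative stand-ins for MMU translation tables:

```python
# Toy model of the MMU-style move: copy the data to a new physical frame,
# free the old frame, then repoint the virtual->physical mapping. A process
# using the virtual address sees no change.

def move_via_remap(page_table, physical, vaddr, new_paddr):
    """Move the page behind vaddr to new_paddr; return the data as seen
    through the (unchanged) virtual address."""
    old_paddr = page_table[vaddr]
    physical[new_paddr] = physical[old_paddr]   # copy the data
    physical[old_paddr] = None                  # old frame is now free
    page_table[vaddr] = new_paddr               # virtual address unchanged
    return physical[page_table[vaddr]]
```

The key property, matching the text, is that reads through `vaddr` return identical data before and after the move.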
  • MMU memory management unit
  • the interrupt handler provides the mechanism for a process running on the processing unit to be interrupted by an interrupt signal from hardware or another software process which indicates the need for attention.
  • a system which incorporates interrupts is known as an interrupt driven system.
  • the interrupt handler features a mechanism which can detect whether an interrupt occurred during any specified time period.
  • MMU Memory Management Unit
  • Another example embodiment of the invention provides an alternative method of moving memory pages that does not depend on MMU locking of the page or have a high overhead if the page is accessed during compaction.
  • a temporary mapping is made between the source and the destination page.
  • the data segment which the RAM compaction system is attempting to move is copied by the RAM compaction system from the source page on the memory storage to the destination page.
  • This copy action is performed with interrupts enabled.
  • interrupts are disabled and the interrupt handler of the operating system is queried to determine if any interrupts occurred during the period of time in which the copy action was being performed of the type that would suggest that the source page might have been modified.
  • in this example embodiment, if such an interrupt did occur, a 'worst case' scenario in which the source page has been altered is assumed.
  • the destination page is designated as empty and the move is abandoned. Otherwise, in this example embodiment the page mapping is changed to point to the destination page instead of the source page, the source page is designated as empty and the temporary mapping is removed.
  • this method of ensuring data integrity requires minimal overhead and reduces complexity. Compared to an implementation that relies upon the MMU to mark areas of memory as inaccessible whilst they are being moved, this method may falsely abort more moves - but it is simpler to implement and has a lower overhead.
  • This example embodiment is also usable on systems which do not have a full MMU, or which cannot selectively protect small regions of memory. Another advantage is that this example embodiment allows memory that is accessed in interrupts to be moved, because compaction does not impede the normal operation of other code. This would not be possible with the MMU-based locking scheme, because that could result in an interrupt causing an exception, which is usually a fatal event and would at least incur a large time overhead in servicing the interrupt.
  • the move is abandoned if an interrupt of any type occurs. This further reduces complexity and therefore some overheads.
  • data fragments may be moved by moving the data fragment from one area of physical memory to another and then using the MMU to alter the virtual memory address for the data fragment to point to the new physical address.
  • the compaction system uses object-oriented function calling to allow an object to handle the moving of a page associated with it in the manner most appropriate to the object.
  • each type of data stored in the memory pages has an associated object which contains the methods to handle that type of page with respect to compaction. For example, if the compaction system has moved a memory page which belongs to a first object, it calls a function in the first object to swap the link to the old page for the new one. In this example embodiment, the method in the first object will know exactly how to handle the page type associated with the first object, including any unusual considerations that need to be made when moving such a page.
  • the calling of the appropriate object is only performed by the defragmentation component 120 once the region of memory is successfully moved.
  • the defragmentation component 120 does not need to consider the details of moving specific data types; it simply calls an object to notify the successful move.
  • the 'object' associated with a particular data type is a C++ class.
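The copy-then-verify page move described in the bullets above can be sketched as follows. This is an illustrative simulation only: the `InterruptHandler` counter and the dictionary-based page table and memory are hypothetical stand-ins for the operating-system facilities the embodiment assumes, and the disabling of interrupts around the check is not modelled.

```python
# Illustrative sketch of the copy-then-verify page move. The
# InterruptHandler counter and the dict-based page table and memory
# are stand-ins for OS facilities; disabling interrupts is not modelled.

class InterruptHandler:
    """Records how many interrupts have fired, so a caller can ask
    whether any occurred during a given window."""
    def __init__(self):
        self.count = 0

    def raise_interrupt(self):
        self.count += 1

    def any_since(self, mark):
        return self.count > mark


def move_page(page_table, src, dst, memory, handler):
    """Copy physical page src to dst with interrupts enabled, then
    commit the move only if no interrupt occurred during the copy."""
    mark = handler.count                 # snapshot before copying
    memory[dst] = list(memory[src])      # copy performed with interrupts enabled
    # (interrupts would be disabled here while the check below is made)
    if handler.any_since(mark):
        memory[dst] = None               # worst case assumed: abandon the move
        return False
    for vpage, ppage in page_table.items():
        if ppage == src:
            page_table[vpage] = dst      # remap virtual page to destination
    memory[src] = None                   # source page designated empty
    return True
```

A design point worth noting: because the check is made after the copy, a falsely aborted move costs only one wasted copy, whereas MMU-based locking would turn every access to the page being moved into an exception.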


Abstract

A memory resource is divided between multiple memory banks, each of which can be powered-down individually. In order to minimise power consumption by the resource, each bank is assigned a priority and data is compacted into those banks having the highest priorities. As low-priority banks are emptied, they can be powered down. In one arrangement, the banks with the highest power consumption and lowest performance are assigned the lowest priorities.

Description

A Method, Apparatus and Computer Program for Moving Data in Memory
FIELD OF THE INVENTION
Examples of the present invention relate to a method, apparatus and computer program for compacting a random access memory of a device.
BACKGROUND
Memory which is write accessible may be prone to data fragmentation, which involves data stored in the memory being separated into smaller fragments of data which are distributed throughout the memory storage. Due to fragmentation, the data is less densely concentrated and more spread out across the memory space, resulting in smaller contiguous blocks of free memory. Data which is distributed across a memory space in this way is uncompacted, whereas data which is less fragmented and contiguous may be described as being compacted.
SUMMARY OF EXAMPLES OF THE INVENTION
A first aspect of the invention provides a method comprising: assigning priorities to a plurality of memory banks in a memory resource; identifying at least one data fragment in the memory resource that is suitable for moving between the memory banks; and moving the identified data fragments into a higher priority memory bank, if there is space in the higher priority bank to do so.
A second aspect of the invention provides an apparatus comprising: at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: assign priorities to a plurality of memory banks in a memory resource; identify at least one data fragment in the memory resource that is suitable for moving between the memory banks; and move the identified data fragments into a higher priority memory bank, if there is space in the higher priority bank to do so.
A third aspect of the invention provides a computer program comprising: code for assigning priorities to a plurality of memory banks in a memory resource; code for identifying at least one data fragment in the memory resource that is suitable for moving between the memory banks; and code for moving the identified data fragments into a higher priority memory bank, if there is space in the higher priority bank to do so.
A fourth aspect of the invention provides a method comprising: identifying data fragments in a memory resource that are suitable for moving; and moving identified data fragments in a manner that compacts data within the memory resource, wherein identifying and moving are performed synchronously with one another.
A fifth aspect of the invention provides an apparatus comprising: at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: identify data fragments in a memory resource that are suitable for moving; and move identified data fragments in a manner that compacts data within the memory resource, wherein identifying and moving are performed synchronously with one another. A sixth aspect of the invention provides a computer program comprising: code for identifying data fragments in a memory resource that are suitable for moving; and code for moving identified data fragments in a manner that compacts data within the memory resource, wherein identifying and moving are performed synchronously with one another.
A seventh aspect of the invention provides a method comprising: identifying data fragments in a memory resource that are suitable for moving; and moving identified data fragments in a manner that compacts data within the memory resource, wherein identifying and moving are performed asynchronously with one another.
An eighth aspect of the invention provides an apparatus comprising: at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: identify data fragments in a memory resource that are suitable for moving; and move identified data fragments in a manner that compacts data within the memory resource, wherein identifying and moving are performed asynchronously with one another.
A ninth aspect of the invention provides a computer program comprising: code for identifying data fragments in a memory resource that are suitable for moving; and code for moving identified data fragments in a manner that compacts data within the memory resource, wherein identifying and moving are performed asynchronously with one another.
A tenth aspect of the invention provides a method comprising: copying a data fragment from a source location to a destination location to initiate a move operation on the data fragment, the source and destination locations being within a memory resource of an interrupt driven processing unit; monitoring whether any interrupts occur during the copying operation which suggest that the data fragment at the source location has been modified during the copy; aborting the move operation if an interrupt indicating modification did occur; and finalising the move operation if an interrupt indicating modification did not occur.
An eleventh aspect of the invention provides an apparatus comprising: at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: copy a data fragment from a source location to a destination location to initiate a move operation on the data fragment, the source and destination locations being within a memory resource of an interrupt driven processing unit; monitor whether any interrupts occur during the copying operation which suggest that the data fragment at the source location has been modified during the copy; abort the move operation if an interrupt indicating modification did occur; and finalise the move operation if an interrupt indicating modification did not occur.
A twelfth aspect of the invention provides a computer program comprising: code for copying a data fragment from a source location to a destination location to initiate a move operation on the data fragment, the source and destination locations being within a memory resource of an interrupt driven processing unit; code for monitoring whether any interrupts occur during the copying operation which suggest that the data fragment at the source location has been modified during the copy; code for aborting the move operation if an interrupt indicating modification did occur; and code for finalising the move operation if an interrupt indicating modification did not occur.
A thirteenth aspect of the invention provides a method comprising: identifying a type of data comprised in a memory page, copying the memory page from a source location to a destination location to initiate a move operation on the memory page, the source and destination locations being within a memory resource of an object oriented computer system; subsequent to the copying operation, calling a page move handling function of an object of the object oriented computer system, wherein the object used is determined by the identified type of data comprised in the memory page and the page move handling function of the object is configured to perform any remaining operations to move a page of the identified type.
A fourteenth aspect of the invention provides an apparatus comprising: at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: identify a type of data comprised in a memory page, copy the memory page from a source location to a destination location to initiate a move operation on the memory page, the source and destination locations being within a memory resource of an object oriented computer system, subsequent to the copying operation, call a page move handling function of an object of the object oriented computer system, wherein the object used is determined by the identified type of data comprised in the memory page and the page move handling function of the object is configured to perform any remaining operations to move a page of the identified type.
A fifteenth aspect of the invention provides a computer program comprising: code for identifying a type of data comprised in a memory page, code for copying the memory page from a source location to a destination location to initiate a move operation on the memory page, the source and destination locations being within a memory resource of an object oriented computer system, code for, subsequent to the copying operation, calling a page move handling function of an object of the object oriented computer system, wherein the object used is determined by the identified type of data comprised in the memory page and the page move handling function of the object is configured to perform any remaining operations to move a page of the identified type.
A sixteenth aspect of the invention provides a method comprising: assigning priorities to a plurality of memory banks in a memory resource; identifying at least one data fragment in the memory resource that is suitable for moving between the memory banks. A seventeenth aspect of the invention provides an apparatus comprising: at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: assign priorities to a plurality of memory banks in a memory resource; identify at least one data fragment in the memory resource that is suitable for moving between the memory banks.
An eighteenth aspect of the invention provides a method comprising: moving data fragments of a memory resource, said data fragments having been identified as suitable for moving between memory banks of the memory resource, at least one memory bank having been assigned a priority, said data fragments being moved into a higher priority memory bank if there is space in the higher priority bank to do so.
A nineteenth aspect of the invention provides an apparatus comprising: at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: move data fragments of a memory resource, said data fragments having been identified as suitable for moving between memory banks of the memory resource, at least one memory bank having been assigned a priority, said data fragments being moved into a higher priority memory bank if there is space in the higher priority bank to do so.
BRIEF DESCRIPTION OF THE DRAWINGS
A description of example embodiments of the present invention will now be made with reference to the accompanying drawings, wherein: figure 1 illustrates a memory containing some fragmented data; figure 2 illustrates the memory of figure 1 in which the fragmented data has been defragmented; figure 3 illustrates a mobile computing device; figure 4 illustrates a memory containing some fragmented data, the memory having been partitioned into banks of varying priority; figure 5 illustrates an example embodiment of the invention in which the search system and the defragmentation system are separate threads; figure 6 illustrates a memory containing some fragmented data, wherein each data fragment has been tagged by the search algorithm; figure 7 illustrates the memory of figure 6, wherein the tags have been arranged in an ordered list; figure 8 illustrates an example embodiment in which the search algorithm searches the memory banks in order of priority; figure 9 is a flow diagram of the operation of the example embodiment of figure 8; figure 10 illustrates an example embodiment in which the search algorithm searches the banks of a particular priority one or more times before moving onto the next one; and figure 11 is a flow diagram of the operation of the example embodiment of figure 10.
DESCRIPTION OF EXAMPLE EMBODIMENTS
The following description provides various exemplary techniques for performing improved compaction of random access memory (RAM).
In an example embodiment, when a device begins to store data in a memory, the memory is empty and the device can choose to locate the data wherever it likes in the memory. However, as the memory fills up, the choice becomes reduced and the empty spaces in the memory may become too small to hold a single large piece of data. In this example embodiment, the data is then split up into smaller pieces, and the smaller pieces are placed into the free spaces in the memory. In this example embodiment, a record is kept of where each small piece of the data has been stored so that the pieces can be found and put back together when the data is needed by the device. For example, for storing files on a permanent storage medium (e.g. a hard disk), this process allows the hard disk to be used to store data files which are larger than any of the individual unused spaces on the hard disk.
In this example embodiment, defragmentation techniques operate by rearranging the data on a storage medium so that, as far as possible, all data relating to one process or file is contiguous. In this example embodiment, some free space is required on the storage medium in order to temporarily store data as it is moved during the defragmentation process.
In the example embodiment of Figure 1, a storage medium is shown containing files A, B, C, D, and E. In this example embodiment, files A and E are fragmented into two pieces each: A1 and A2, and E1 and E2, respectively. An example of a defragmentation algorithm is the following:
1 ) A sweep of the storage memory is performed to establish a map of the stored data and the fragmentations that have occurred.
2) An attempt is made to make the first fragment on the storage medium, fragment A1, into contiguous data. This is done by first moving any fragments which lie in the space required for contiguous data A. Data B is therefore moved out of the way to a temporary location (often the end of the storage medium).
3) Fragment A2 is moved to the end of fragment A1 to form contiguous data A. The second data set, B, is then moved from the temporary location to a position immediately following A.
4) This process continues until all of the fragments have been rearranged to form contiguous data, and all data has been arranged contiguously in the medium.
The end result of this example embodiment is shown in figure 2. In this example embodiment, the contiguous data is arranged at the beginning of the storage medium to minimise future fragmentation. Although figure 2 shows the data arranged contiguously, this is not always possible since certain files may be immovable.
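The four-step procedure above can be sketched as follows. For brevity the sketch rebuilds the medium in a single pass rather than modelling the move-via-temporary-location steps; the cell layout (a list of `(file, fragment)` tags, with `None` for free space) is an illustrative assumption, not part of the disclosed system.

```python
# Illustrative sketch of the sweep-and-compact procedure of figures 1
# and 2. The medium is a list of cells, each holding a (file, fragment)
# tag or None for free space. Fragments are rewritten so that each
# file's data is contiguous at the start of the medium.

def defragment(medium):
    # Step 1: sweep the medium to establish which files are present,
    # in order of first appearance.
    files = []
    for cell in medium:
        if cell is not None and cell[0] not in files:
            files.append(cell[0])
    # Steps 2-4, collapsed: rebuild the medium with each file's
    # fragments contiguous and all free space collected at the end.
    compacted = []
    for f in files:
        compacted.extend(c for c in medium if c is not None and c[0] == f)
    compacted.extend([None] * (len(medium) - len(compacted)))
    return compacted
```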
In this example embodiment, defragmentation of a system's storage medium is scheduled by the user of the system to run at regular intervals. In another example embodiment, the defragmentation process may be triggered when the system detects a certain level of fragmentation on the storage medium.
In this example embodiment, for Random Access Memory, an additional advantage of defragmenting and compacting data is to save power. In this example embodiment, large memory arrays are divided into regions called banks. In this example embodiment, each bank can be placed into a low-power data retention mode independently of the other banks. While in this low-power mode the data content is preserved but is not accessible. In this example embodiment, by compacting the data into the smallest possible number of active banks the number of banks which can be switched to low-power mode is maximised, and therefore the power consumption is minimised.
In an example embodiment, a mobile computing device uses system idle time to run RAM defragmentation processes. Similar to file system fragmentation, random access memory containing active processes and working data can become fragmented as data is frequently swapped in and out of RAM. One advantage of this exemplary method of handling defragmentation is that the defragmentation is only performed whilst the device is idle, which prevents the defragmentation from having any visible effect on the performance of the device.
However, in some other example embodiments, it is desirable to give the defragmentation a higher priority than background, for example to compact memory before switching the entire device to low-power standby mode in order to obtain the greatest power saving. Therefore, in such example embodiments, it is desirable that the defragmentation algorithm can be configured for different situations.
The example embodiment of Figure 3 shows a mobile computing device 100 with a central processing unit (CPU) 10, random access memory 30, and permanent storage medium 40. In this example embodiment, the CPU, RAM, and permanent storage medium are in communication with each other by system bus 20. In this example embodiment, RAM compaction system 50 is a processor thread running on CPU 10 and has high level access to RAM 30. In some other example embodiments, the RAM compaction system may alternatively be run on a secondary processor or be implemented as a hardware layer between the CPU and the RAM.
Memory Bank Prioritisation
In an example embodiment of the invention shown in figure 4, the RAM 30 is spread across multiple memory banks.
In this example embodiment, each memory bank is assigned a priority. The priority of the bank indicates the importance of keeping the bank powered up and in use. In this example embodiment, lower priority memory banks are the memory banks which the system prefers to use the least, and so will work hardest to empty. The priority of a memory bank may depend, for example, on the energy consumption used by the memory bank or the type of technology used in the memory bank. For example, if the energy consumed by the particular memory bank to retain data is relatively high, it may be assigned a low priority. In this example embodiment, if read and/or write access to the memory bank is relatively slow, it may again be assigned a low priority.
In this example embodiment, for simplicity, figure 4 shows just three memory banks: a low priority memory bank, a medium priority memory bank and a high priority memory bank. However, in some other example embodiments any number of memory banks can be used. Such example embodiments are not limited to just three discrete levels of priority; other numbers of priority levels could be used. In some other example embodiments, every memory bank is assigned a different priority level (possibly arbitrarily, in the case where two banks are in reality equally important); the unique priority levels simplify some implementations. In some other example embodiments, more than one bank may share the same priority level. In some examples, it is not necessary that all memory banks within a memory resource are assigned a priority.
In this example embodiment, if the storage of information in memory is based upon the priority of the available memory banks, with the information stored preferably in high-priority memory banks, low priority memory banks will often be empty and can therefore be powered-off. In this example embodiment, the operating system memory allocation algorithm will attempt to satisfy memory requests from the highest possible priority bank, moving to lower-priority banks until the total required amount of memory has been allocated. In this example embodiment, as the memory becomes fragmented it will become more difficult to satisfy memory requests from high priority banks.
In this example embodiment, defragmentation will rectify this by sorting the information in memory based on bank priority. In this example embodiment, where the most power hungry memory banks are assigned a low priority rating, memory will be moved out of the low priority/power hungry memory banks into high priority/low power usage memory banks, allowing the more power hungry banks to be powered down, reducing the total power consumption of the memory.
In this example embodiment, page tables and page directories, or equivalent structures used by the system Memory Management Unit, are forcibly allocated into high priority banks. This is achieved by searching the high priority banks for a sufficiently large contiguous block, moving that contiguous block into lower priority memory, then moving the page tables or page directories into the newly freed space in the high priority memory bank. This reduces the likelihood that these structures will need to be moved again when the memory is later defragmented. In this example embodiment, as access to page tables and directories is particularly critical to the operation of the device, reduced movement of these structures will result in higher device performance.
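The priority-ordered allocation described above - satisfying a request from the highest possible priority bank and falling back to lower-priority banks until the total amount has been allocated - might be sketched as follows. The `Bank` structure and page-count granularity are illustrative assumptions of this sketch, not details from the disclosure.

```python
# Illustrative sketch of priority-ordered allocation: a request is
# satisfied from the highest-priority bank with free pages, falling
# back to lower-priority banks until the request is met.

class Bank:
    def __init__(self, priority, pages):
        self.priority = priority   # higher value = prefer to keep in use
        self.free = pages          # number of free pages in the bank

def allocate(banks, n_pages):
    """Return {bank: pages taken}, preferring high-priority banks.
    Raises MemoryError if the request cannot be met (this sketch
    performs no rollback of partially satisfied requests)."""
    taken = {}
    for bank in sorted(banks, key=lambda b: b.priority, reverse=True):
        if n_pages == 0:
            break
        grab = min(bank.free, n_pages)
        if grab:
            bank.free -= grab
            n_pages -= grab
            taken[bank] = grab
    if n_pages:
        raise MemoryError("request could not be satisfied")
    return taken
```

As the surrounding text notes, once memory becomes fragmented this allocator is forced into ever lower-priority banks, which is exactly the condition the defragmentation pass then corrects.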
Separate search system and defragmentation system
Figure 5 shows an example embodiment of the RAM compaction system 50 in which it is broken down into two components, a search system 110 and a defragmentation system 120. In this example embodiment, each of these two components can be individually configured. In this example embodiment, search system 110 operates by identifying which memory areas can be moved and how to move them. In this example embodiment, defragmentation system 120 operates by determining when and how to run the defragmentation on the memory areas identified by search system 110. In this example embodiment, separation of the RAM compaction system into the two components advantageously allows the components to be individually configured and individually optimised. For example, if a newer, more efficient search system is developed, it can be integrated into the RAM compaction system 50 without rebuilding the existing defragmentation system. In this example embodiment, the defragmentation system need not even be stopped whilst the new search system is introduced. Likewise, in this example embodiment, the defragmentation mechanism can be replaced without changing the search algorithm, for example to provide a hardware-optimised defragmentation or to allow defragmentation at times other than background idle.
Furthermore, in this example embodiment, for a parallel processing machine such as a device using a multi-core CPU, the componentisation of the RAM compaction system 50 allows the overall load of compaction to be more easily and effectively distributed, for example across the multiple cores. For example, the search system 110 may use a first core, whilst the defragmentation system may use a second core.
Fast searching using memory tagging and indexing
An example embodiment of the invention is shown in figure 6. In this example embodiment, using a defined search algorithm, search component 110 tags each part of the memory with a flag 810 which indicates whether it is movable or non-movable. In this example embodiment, the tagging of a part of memory is performed in dependence on the data type stored in each part of the memory and whether it would be suitable for moving.
In this example embodiment, as shown in figure 7, once the memory has been tagged, the memory tags are compiled into an ordered list 910 within which fast searching can be performed. This allows fragments which are potentially moveable to be quickly identified by defragmentation system 120. In this example embodiment, the location of the tagged data in memory or the priority of the memory bank in which the tagged data is stored may be used to determine the list order. In this example embodiment, the tagging list is ordered by bank priority and then the sequential order of the page within each bank; this allows the defragmentation component 120 to easily ignore banks where there is little or no benefit to be gained from defragmentation. Furthermore, in this example embodiment, apart from searching new data stored in the memory and updating the ordered list to include the new data, no further searching is required.
In this example embodiment, the compilation of the ordered list is performed by search system 110 and subsequent access to the ordered list is provided to defragmentation system 120.
In this example embodiment, the ordered list allows accelerated defragmentation and compaction, as the RAM compaction system may subsequently refer to the index rather than searching the physical memory for a movable section of memory.
In this example embodiment, the list is stored as a bitmap, which can be faster to scan than a full table of searchable memory areas.
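The tagging and ordering described above might be sketched as follows. The `(bank_priority, page_index, movable)` tuples and the bit-per-page bitmap are illustrative assumptions; the sketch orders the list by ascending bank priority on the premise that low-priority banks are the ones to be emptied first, though the disclosure does not fix the direction.

```python
# Illustrative sketch of the tagging pass: pages flagged as movable
# are compiled into a list ordered by bank priority and then by page
# index within the bank, with a parallel bitmap for fast scanning.

def build_move_list(pages):
    """pages: list of (bank_priority, page_index, movable) tuples,
    with globally unique page indices (an assumption of this sketch).
    Returns (ordered movable pages, bitmap of movable page indices)."""
    tagged = sorted(
        (p for p in pages if p[2]),       # keep only movable pages
        key=lambda p: (p[0], p[1]),       # bank priority, then page order
    )
    bitmap = 0
    for _priority, index, movable in pages:
        if movable:
            bitmap |= 1 << index          # one bit per page index
    return tagged, bitmap
```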
In example embodiments, the search algorithm and defragmentation system may be run synchronously or asynchronously with each other. The simpler example embodiment is synchronous operation, in which the defragmentation system is launched for each fragment found by the search algorithm and the search algorithm waits for completion of the defragmentation before searching for the next movable fragment. In this example embodiment, the memory fragment being moved is locked or tagged so that it cannot change type or status between the search and completion of the defragmentation, so that the conditions on which the decision to move this fragment was made do not become invalid when the defragmentation is performed. In this example embodiment, the locking or tagging may be performed in various ways depending on the operating system; one possible example is to require a mutex to be held in order to change or move a fragment. An alternative example embodiment is to allow the type to be changed and to check at the end of defragmentation whether this has happened, and if so to discard the copied fragment and abandon the defragmentation of this single fragment.
The asynchronous example embodiment allows the search and defragmentation to run independently, for example as separate threads or on separate cores of a multi-core system. In this example embodiment, the list of fragments to move which was compiled by the search algorithm may be invalid when the defragmentation system runs, because the status of some or all of the fragments has been changed. This can be avoided in a number of ways; one possible example implementation is that any operation that can change the type or status of a page, such as a deallocation, reallocation, locking or unlocking, will cause the page to be removed from the list compiled by the search algorithm and thus ignored by the defragmentation system.
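The asynchronous safeguard just described - removing a page from the compiled list whenever its type or status changes - might be sketched as follows. The `MoveList` class is an illustrative stand-in for the structure shared between the search and defragmentation threads; a real implementation would also need locking around it, which this sketch omits.

```python
# Illustrative sketch of the asynchronous safeguard: operations that
# change a page's type or status remove it from the list the search
# thread compiled, so the defragmentation thread silently skips it.

class MoveList:
    def __init__(self):
        self.pending = set()

    def add(self, page):
        """Called by the search thread when a movable page is found."""
        self.pending.add(page)

    def invalidate(self, page):
        """Called on deallocation, reallocation, locking or unlocking;
        a page not in the list is ignored."""
        self.pending.discard(page)

    def next_move(self):
        """Called by the defragmentation thread; None when exhausted."""
        return self.pending.pop() if self.pending else None
```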
Search Algorithms
In example embodiments different search algorithms may be used by search component 110 to identify areas of memory suitable to be moved.
An example embodiment of the invention is shown in figure 8. Figure 9 provides a flow diagram of the operation of the example embodiment of figure 8. In the example embodiment, the search algorithm used by search system 110 of the RAM compaction system causes the component to operate according to the following: 1. Search (fig. 9, block 950) the lowest priority memory bank for areas of memory that can be moved into available space in a higher priority memory bank.
2. Once an area of memory is identified (block 952) as suitable to be moved, the piece of memory is tagged (block 952) to be moved and the search is continued from that point.
3. Once the bank has been fully searched (block 954) and no new memory has been identified as being suitable to be moved, the search moves on to the bank with the next lowest priority (blocks 956 and 958). This continues until the bank with the highest priority is reached.
4. When the bank with the highest priority is reached, the search is reset to the bank with the lowest priority (blocks 956 and 950).
In another example embodiment, the highest priority bank is not searched because there is no bank of higher priority that its contents can usefully be moved to - i.e. moving an area of memory out of the highest priority bank will never result in an increase in compaction. This feature is included in the above operations.
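One sweep of the search described in steps 1 to 4, with the highest priority bank excluded, can be sketched in Python. This is an illustrative simulation under assumed data structures (a list of banks in ascending priority order, each bank a list of fragment records), not the patent's implementation.

```python
def sweep_search(banks):
    """One sweep from the lowest- to the highest-priority bank.

    `banks` is ordered by ascending priority; each bank is a list of
    fragments, each fragment a dict with a 'movable' flag.
    Returns (bank_index, fragment_index) pairs tagged for moving.
    The highest-priority bank is never searched, since there is no
    higher-priority bank its contents could usefully be moved to."""
    tagged = []
    for b in range(len(banks) - 1):        # exclude the highest-priority bank
        for f, frag in enumerate(banks[b]):
            if frag['movable']:
                tagged.append((b, f))      # tag it and continue from this point
    return tagged
```

A real implementation would also record the current position so the sweep can resume after an interruption, and would reset to the lowest-priority bank once the sweep completes.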
In another example embodiment, the search system 110 may skip all banks which include immovable areas of memory. This is because it will not be possible to power down a bank as long as it includes an immovable area, and there is therefore little to be gained by otherwise emptying the bank. A better solution may be to increase the priority of such banks, to ensure that they are filled in preference to other banks. If, on the other hand, it is expected that the bank will in future contain no immovable areas, the bank may be searched (and, optionally, de-prioritised) in order to keep it as empty as possible so that it can be powered down as soon as possible after the last immovable area ceases to exist. In this example embodiment, the search algorithm causes search component 110 to do a sweep, searching from low- to high-priority memory banks and resetting to the lowest priority bank once it has reached the highest priority bank. In this example embodiment, search component 110 stores its search position when interrupted so that it can return to that position when the search is resumed.
In this example embodiment, once an area of memory has been identified as moveable, it is desirable to move that area to as high priority a bank as possible. Therefore, only banks with a higher priority than the source bank should be considered. In this example embodiment, the highest priority target location is found using the kernel allocator's normal page allocation function. If this is not possible, a separate search for a suitable target location can be made.
In an alternative example embodiment, the search algorithm causes search system 110 to repeatedly search the lowest priority banks until they become completely free, before switching to a higher priority set of banks. This example embodiment is shown in figure 9, and figure 10 provides a flow diagram of its operation. In this example embodiment, the search component 110 of the RAM compaction system 50 operates according to the following:
1. Search (fig. 11, block 960) the lowest priority memory bank for areas of memory that can be moved into available space in a higher priority memory bank.
2. Once an area of memory is identified (block 962) as suitable to be moved, the piece of memory is tagged (block 962) to be moved and the search is reset to the beginning of the lowest priority bank (block 964).
3. Once a bank has been fully searched (block 966) and no new memory has been identified as being suitable to be moved, the search moves on to the bank with the next lowest priority (blocks 968 and 970). This continues until the bank with the highest priority is reached.
4. When the highest priority bank is reached, the search is reset to the lowest priority bank (blocks 968 and 960).
As previously, these operations can represent an example embodiment in which the highest priority bank is never searched.
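The reset-to-lowest variant differs from the plain sweep only in step 2: after each tag the search restarts at the beginning of the lowest priority bank, and it only advances to the next bank once a pass finds nothing. A minimal Python sketch, under the same assumed data structures as before and with a 'tagged' flag added for illustration:

```python
def reset_search(banks):
    """`banks` is a list (ascending priority) of lists of fragments, each a
    dict with a 'movable' flag. Tags movable fragments, resetting to the
    lowest-priority bank after every tag; advances one priority level only
    when a bank yields nothing new. The highest-priority bank is never
    searched. Returns the tags in the order they were made."""
    order = []
    bank = 0
    top = len(banks) - 1                   # index of the highest-priority bank
    while bank < top:
        found = None
        for f, frag in enumerate(banks[bank]):
            if frag['movable'] and not frag.get('tagged'):
                found = f
                break
        if found is not None:
            banks[bank][found]['tagged'] = True
            order.append((bank, found))
            bank = 0                       # reset to the lowest-priority bank
        else:
            bank += 1                      # bank clean: move up one level
    return order
```

Note that every tag forces a rescan from the bottom, which is why this variant keeps the lowest-priority banks as free as possible at the cost of repeated scanning.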
The search algorithm of this latter example embodiment ensures that the search component 110 will always find any low-priority memory contents that are suitable for moving. In this example embodiment, whilst it may result in the search component 110 repeatedly scanning low-priority banks to pick up only a small amount of movable memory each time (especially if the search is frequently interrupted), this approach has the advantage of keeping the lowest-priority banks as free as possible.
In another example embodiment of the invention, the search algorithm causes search component 110 to perform one complete full search sweep from the low-priority banks to the high-priority banks first of all, before the search returns to the low-priority bank and proceeds as in either of the above example embodiments. In this example embodiment, new allocations in the low-priority bank will not be moved until a complete scan has been done, but this is outweighed by the benefit that this algorithm maximises the amount of defragmentation that can be done with a single visit to each bank and so is more efficient overall. The reason for this efficiency is that when a single sweep is used the search system 110 is less likely to become 'stuck' in a loop where a bank is cleared, then memory is allocated within it, then the bank is cleared again, and so on, and the bank oscillates between active and inactive.
The above example embodiments represent exemplary behaviours for the search system 110, but it will be appreciated that in some other example embodiments other behaviours are possible and the choice of a particular behaviour will vary from application to application. For example, in one example embodiment, the search system 110 is directed by the device to repeatedly sweep a specific bank with a view to clearing (and therefore powering-down) that particular bank as soon as possible.
In one example embodiment of the invention, the search algorithm is configured to predict which sections of memory are likely to become unmovable at a later time and mark these sections as unmovable. For example, memory which can be accessed by physical address or by a virtual mapping of physical RAM, whether it is being currently used or not, might be considered to be an unsuitable candidate for moving. Therefore, this memory can be marked as 'non-movable'.
Safe interrupt handling
In an example embodiment, an exemplary computer system comprises a main operating system (OS) with a processing unit and an interrupt handler. In this example embodiment, processes running on the computer system access the physical memory of a device by means of a virtual address, provided by the operating system, that maps to a physical address. In this example embodiment, data fragments may be moved by copying the data fragment from one area of physical memory to another and then using a memory management unit (MMU), under the control of the computer system's operating system, to alter the virtual memory address for the data fragment to point to the new physical address. This leaves the virtual memory looking unchanged to the processes running on the OS.
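The remapping step can be illustrated with a toy page table. This is a Python simulation of the concept only (a real MMU remap is a hardware page-table update); the dictionaries standing in for the page table and physical memory are illustrative assumptions.

```python
def move_fragment(page_table, memory, vaddr, new_paddr):
    """Simulate moving a data fragment: copy the data to a new physical
    page, then repoint the virtual mapping. Processes keep using `vaddr`
    and see no change; only the physical location differs."""
    old_paddr = page_table[vaddr]
    memory[new_paddr] = memory[old_paddr]   # copy the fragment's data
    page_table[vaddr] = new_paddr           # MMU remap: same virtual address
    del memory[old_paddr]                   # the source page is now free
    return old_paddr
```

After the call, a read through the virtual address returns the same data as before the move, which is the property the compaction system relies on.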
In this example embodiment, the interrupt handler provides the mechanism for a process running on the processing unit to be interrupted by an interrupt signal from hardware or another software process which indicates the need for attention. A system which incorporates interrupts is known as an interrupt driven system. In this example embodiment, the interrupt handler features a mechanism which can detect whether an interrupt occurred during any specified time period.
In this example embodiment, when a fragment is being moved there must be some way to prevent the data within it from changing during the move; otherwise the resulting copy could contain a mix of old and new data. One example method, where the hardware supports it, is to use the memory management unit (MMU) to mark the page in such a way that an access to it from any code other than the compaction code results in a CPU exception or interrupt, which can be used to abort the copy. The advantage of this exemplary implementation is that the move is only aborted if the source page was genuinely accessed.
Another example embodiment of the invention provides an alternative method of moving memory pages that does not depend on MMU locking of the page and does not incur a high overhead if the page is accessed during compaction. In this example embodiment, when moving pages of memory, a temporary mapping is made between the source and the destination page. The data segment which the RAM compaction system is attempting to move is copied by the RAM compaction system from the source page to the destination page. This copy action is performed with interrupts enabled. Once the copy action is complete, interrupts are disabled and the interrupt handler of the operating system is queried to determine whether any interrupts of a type suggesting that the source page might have been modified occurred during the period in which the copy action was being performed. If such an interrupt did occur, a 'worst case' scenario in which the source page has been altered is assumed: the destination page is designated as empty and the move is abandoned. Otherwise, the page mapping is changed to point to the destination page instead of the source page, the source page is designated as empty and the temporary mapping is removed. This method of ensuring data integrity requires minimal overheads and reduces complexity. Compared to an implementation that relies upon the MMU to mark areas of memory as inaccessible while they are being moved, this method may falsely abort moves more often, but it is simpler to implement and has a lower overhead. This example embodiment is also usable on systems which do not have a full MMU or cannot selectively protect small regions of memory.
Another advantage is that this example embodiment allows memory that is accessed in interrupts to be moved, because compaction does not impede the normal operation of other code. This would not be possible with the MMU-based locking scheme, because that scheme could result in an interrupt causing an exception, which is usually a fatal event and would at least incur a large time overhead in servicing the interrupt.
In an alternative example embodiment, the move is abandoned if an interrupt of any type occurs. This further reduces complexity and therefore some overheads.
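The copy-then-check scheme, in its simpler abort-on-any-interrupt form, can be sketched as follows. This is a user-space Python simulation; `interrupt_count` stands in for the interrupt handler's record of interrupts taken, and the enabling/disabling of interrupts around the check is elided since it cannot be expressed at this level.

```python
def checked_move(src, dst, interrupt_count):
    """Copy `src` into `dst` with interrupts enabled, then abort if any
    interrupt fired during the copy (the simpler variant above).

    `interrupt_count` is assumed to return the number of interrupts
    taken so far, as maintained by the interrupt handler. Returns True
    if the move may be finalised, False if it was abandoned."""
    before = interrupt_count()
    dst[:] = src[:]                  # copy performed with interrupts enabled
    # (here a real system would disable interrupts before the check)
    if interrupt_count() != before:  # worst case: source may have been altered
        dst[:] = []                  # designate the destination page as empty
        return False                 # move abandoned; source page stays live
    return True                      # finalise: remap to dst, free the source
```

Finalising the move (changing the page mapping and freeing the source page) is left to the caller, matching the split between copy and commit in the description above.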
Object orientated memory management
In the example embodiment above, data fragments may be moved by moving the data fragment from one area of physical memory to another and then using the MMU to alter the virtual memory address for the data fragment to point to the new physical address. In some example embodiments, there may be other consequences of moving physical memory around which need to be addressed. For example, consider demand paging schemes, in which the system only copies a memory page into main memory if a process attempts to access it, and which may need to keep a list of the actual physical pages being paged in and out. In one embodiment of the invention, the compaction system uses object orientated function calling to allow an object to handle the moving of a page associated with it in the manner most appropriate to the object. In this example embodiment, each type of data stored in the memory pages has an associated object which contains the methods to handle that type of page with respect to compaction. For example, if the compaction system has moved a memory page which belongs to a first object, it calls a function in the first object to swap the link to the old page for the new one. In this example embodiment, the method in the first object will know exactly how to handle the page type associated with the first object, including any unusual considerations that need to be made when moving such a page.
In an example embodiment, the calling of the appropriate object is only performed by the defragmentation component 120 once the region of memory is successfully moved. In this example embodiment, the defragmentation component 120 does not need to consider the details of moving specific data types; it simply calls an object to notify the successful move.
In an example embodiment, the 'object' associated with a particular data type is a C++ class.
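The per-type dispatch described above can be sketched with a small class hierarchy. The description contemplates C++ classes; this Python rendering of the same idea uses illustrative names (`PageOwner`, `on_page_moved`, `resident_pages`) that are assumptions, not terms from the patent.

```python
class PageOwner:
    """Base class: each type of data stored in memory pages supplies its
    own handling for a page that has just been moved."""

    def on_page_moved(self, old_page, new_page):
        raise NotImplementedError


class DemandPagedOwner(PageOwner):
    """A type that keeps a list of the physical pages currently paged in,
    so a move must also update that bookkeeping."""

    def __init__(self):
        self.resident_pages = []

    def on_page_moved(self, old_page, new_page):
        # Swap the link to the old physical page for the new one.
        i = self.resident_pages.index(old_page)
        self.resident_pages[i] = new_page


def notify_move(owner, old_page, new_page):
    """What the defragmentation component does after a successful move:
    it simply calls the owner object, without knowing the page type."""
    owner.on_page_moved(old_page, new_page)
```

The defragmentation component stays generic: any type-specific consequences of a move live in the owner's method, which is the design choice the embodiment describes.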
The applicant hereby discloses in isolation each individual feature described herein and any combination of two or more such features, to the extent that such features or combinations are capable of being carried out based on the present specification as a whole in the light of the common general knowledge of a person skilled in the art, irrespective of whether such features or combinations of features solve any problems disclosed herein, and without limitation to the scope of the claims. The applicant indicates that aspects of the present invention may consist of any such individual feature or combination of features. In view of the foregoing description it will be evident to a person skilled in the art that various modifications may be made within the scope of the invention.

Claims

1. A method comprising: assigning priorities to a plurality of memory banks in a memory resource; identifying at least one data fragment in the memory resource that is suitable for moving between the memory banks; and moving the identified data fragments into a higher priority memory bank, if there is space in the higher priority bank to do so.
2. The method of claim 1, comprising powering-down a memory bank once it has been emptied.
3. The method of any preceding claim, wherein the priority of a bank is dependent upon its power consumption.
4. The method of any preceding claim, wherein the priority of a bank is dependent upon at least one of the read and write access speeds of the bank.
5. The method of any preceding claim, wherein the priority of a bank is dependent upon the moveability of data fragments already stored in the bank.
6. The method of any preceding claim, wherein identifying at least one data fragment in the memory resource suitable for moving comprises searching the memory banks of the memory resource in an order determined by the priority associated with each bank.
7. The method of any preceding claim, wherein identifying at least one data fragment in the memory resource suitable for moving comprises searching the memory banks in order of increasing priority.
8. The method of any preceding claim, wherein the highest priority memory bank is not searched for movable data fragments.
9. The method of any preceding claim, wherein any memory bank already known to contain immovable data fragments is not searched for movable data fragments.
10. The method of any preceding claim, wherein a data fragment is suitable for moving if it is currently moveable and is expected to remain moveable.
11. The method of any preceding claim, wherein a data fragment is suitable for moving if it cannot be accessed using a physical address.
12. An apparatus comprising: at least one processor; and at least one memory including computer program code the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: assign priorities to a plurality of memory banks in a memory resource; identify at least one data fragment in the memory resource that is suitable for moving between the memory banks; and move the identified data fragments into a higher priority memory bank, if there is space in the higher priority bank to do so.
13. The apparatus of claim 12, configured to power-down a memory bank once it has been emptied.
14. The apparatus of claims 12 or 13, wherein the priority of a bank is dependent upon its power consumption.
15. The apparatus of any one of claims 12 to 14, wherein the priority of a bank is dependent upon at least one of the read and write access speeds of the bank.
16. The apparatus of any one of claims 12 to 15, wherein the priority of a bank is dependent upon the moveability of data fragments already stored in the bank.
17. The apparatus of any one of claims 12 to 16, wherein to identify at least one data fragment in the memory resource suitable for moving the apparatus searches the memory banks of the memory resource in an order determined by the priority associated with each bank.
18. The apparatus of any one of claims 12 to 17, wherein to identify at least one data fragment in the memory resource suitable for moving the apparatus searches the memory banks in order of increasing priority.
19. The apparatus of any one of claims 12 to 18, wherein the highest priority memory bank is not searched for movable data fragments.
20. The apparatus of any one of claims 12 to 19, wherein any memory bank already known to contain immovable data fragments is not searched for movable data fragments.
21. The apparatus of any one of claims 12 to 20, wherein a data fragment is suitable for moving if it is currently moveable and is expected to remain moveable.
22. The apparatus of any one of claims 12 to 21 , wherein a data fragment is suitable for moving if it cannot be accessed using a physical address.
23. The apparatus of any one of claims 12 to 22, wherein the at least one memory comprises the memory resource.
24. A computer program comprising: code for assigning priorities to a plurality of memory banks in a memory resource; code for identifying at least one data fragment in the memory resource that is suitable for moving between the memory banks; and code for moving the identified data fragments into a higher priority memory bank, if there is space in the higher priority bank to do so.
25. A computer-readable medium bearing a computer program according to claim 24.
26. A method comprising: identifying data fragments in a memory resource that are suitable for moving; and moving identified data fragments in a manner that compacts data within the memory resource, wherein identifying and moving are performed synchronously with one another.
27. The method of claim 26, wherein compacting data comprises defragmenting the data.
28. The method of claim 26 or claim 27, further comprising the method of any of claims 1 to 11.
29. An apparatus comprising: at least one processor; and at least one memory including computer program code the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: identify data fragments in a memory resource that are suitable for moving; and move identified data fragments in a manner that compacts data within the memory resource, wherein identifying and moving are performed synchronously with one another.
30. The apparatus of claim 29, wherein compacting data comprises defragmenting the data.
31. The apparatus of claim 29 or claim 30, further comprising the features of any of claims 12 to 23.
32. The apparatus of any of claims 29 to 31 , wherein the at least one memory comprises the memory resource.
33. A computer program comprising: code for identifying data fragments in a memory resource that are suitable for moving; and code for moving identified data fragments in a manner that compacts data within the memory resource, wherein identifying and moving are performed synchronously with one another.
34. A computer-readable medium bearing a computer program according to claim 33.
35. A method comprising: identifying data fragments in a memory resource that are suitable for moving; and moving identified data fragments in a manner that compacts data within the memory resource, wherein identifying and moving are performed asynchronously with one another.
36. The method of claim 35, further including: detecting a change in the type or state of a page of data; and preventing an identified page of data from being moved in response to the detection.
37. The method of claim 35 or claim 36, wherein compacting data comprises defragmenting the data.
38. The method of any of claims 35 to 37, further comprising the method of any of claims 1 to 11.
39. The method of any of claims 35 to 38, wherein the action of changing the type or status of a page of identified data prevents that page from being moved.
40. An apparatus comprising: at least one processor; and at least one memory including computer program code the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: identify data fragments in a memory resource that are suitable for moving; and move identified data fragments in a manner that compacts data within the memory resource, wherein identifying and moving are performed asynchronously with one another.
41. The apparatus of claim 40, caused to additionally perform the following: detect a change in the type or state of a page of data; and prevent an identified page of data from being moved in response to the detection.
42. The apparatus of claim 40 or claim 41 , wherein compacting data comprises defragmenting the data.
43. The apparatus of any of claims 40 to 42, further comprising the features of any of claims 12 to 23.
44. The apparatus of any of claims 40 to 43, wherein the action of changing the type or status of a page of identified data prevents that page from being moved.
45. The apparatus of any of claims 40 to 44, wherein the at least one memory comprises the memory resource.
46. A computer program comprising: code for identifying data fragments in a memory resource that are suitable for moving; and code for moving identified data fragments in a manner that compacts data within the memory resource, wherein identifying and moving are performed asynchronously with one another.
47. A computer-readable medium bearing a computer program according to claim 46.
48. A method comprising: copying a data fragment from a source location to a destination location to initiate a move operation on the data fragment, the source and destination locations being within a memory resource of an interrupt driven processing unit; monitoring whether any interrupts occur during the copying operation which suggest that the data fragment at the source location has been modified during the copy; aborting the move operation if an interrupt indicating modification did occur; and finalising the move operation if an interrupt indicating modification did not occur.
49. The method of claim 48, wherein: aborting the move operation comprises indicating that the memory at the destination location is unused; and finalising the move operation comprises indicating that the data fragment can be found at the destination location and that the source destination is unused.
50. The method of any of claims 1 -11 , wherein moving the identified data fragments is performed according to any of claims 48-49.
51. An apparatus comprising: at least one processor; and at least one memory including computer program code the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: copy a data fragment from a source location to a destination location to initiate a move operation on the data fragment, the source and destination locations being within a memory resource of an interrupt driven processing unit; monitor whether any interrupts occur during the copying operation which suggest that the data fragment at the source location has been modified during the copy; abort the move operation if an interrupt indicating modification did occur; and finalise the move operation if an interrupt indicating modification did not occur.
52. The apparatus of claim 51 , wherein: aborting the move operation comprises indicating that the memory at the destination location is unused; and finalising the move operation comprises indicating that the data fragment can be found at the destination location and that the source destination is unused.
53. The apparatus of any of claims 12-23, further comprising the features of any of claims 51 -52 to move the identified data fragments.
54. The apparatus of any of claims 51 to 53, wherein the at least one memory comprises the memory resource.
55. The apparatus of any of claims 51 to 54, wherein the at least one processor comprises the interrupt driven processing unit.
56. A computer program comprising: code for copying a data fragment from a source location to a destination location to initiate a move operation on the data fragment, the source and destination locations being within a memory resource of an interrupt driven processing unit; code for monitoring whether any interrupts occur during the copying operation which suggest that the data fragment at the source location has been modified during the copy; code for aborting the move operation if an interrupt indicating modification did occur; and code for finalising the move operation if an interrupt indicating modification did not occur.
57. A computer-readable medium bearing a computer program according to claim 56.
58. A method comprising: identifying a type of data comprised in a memory page, copying the memory page from a source location to a destination location to initiate a move operation on the memory page, the source and destination locations being within a memory resource of an object oriented computer system, subsequent to the copying operation, calling a page move handling function of an object of the object oriented computer system, wherein the object used is determined by the identified type of data comprised in the memory page and the page move handling function of the object is configured to perform any remaining operations to move a page of the identified type.
59. The method of claim 58 wherein the object is a C++ class.
60. An apparatus comprising: at least one processor; and at least one memory including computer program code the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: identify a type of data comprised in a memory page, copy the memory page from a source location to a destination location to initiate a move operation on the memory page, the source and destination locations being within a memory resource of an object oriented computer system, subsequent to the copying operation, call a page move handling function of an object of the object oriented computer system, wherein the object used is determined by the identified type of data comprised in the memory page and the page move handling function of the object is configured to perform any remaining operations to move a page of the identified type.
61. The apparatus of claim 60, wherein the object is a C++ class.
62. The apparatus of claim 60 or claim 61 , wherein the apparatus comprises the object oriented computer system.
63. The apparatus of any of claims 60 to 62, wherein the at least one memory comprises the memory resource.
64. A computer program comprising: code for identifying a type of data comprised in a memory page, code for copying the memory page from a source location to a destination location to initiate a move operation on the memory page, the source and destination locations being within a memory resource of an object oriented computer system, code for, subsequent to the copying operation, calling a page move handling function of an object of the object oriented computer system, wherein the object used is determined by the identified type of data comprised in the memory page and the page move handling function of the object is configured to perform any remaining operations to move a page of the identified type.
65. A computer-readable medium bearing a computer program according to claim 64.
66. A method comprising: assigning priorities to a plurality of memory banks in a memory resource; identifying at least one data fragment in the memory resource that is suitable for moving between the memory banks.
67. An apparatus comprising: at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: assign priorities to a plurality of memory banks in a memory resource; identify at least one data fragment in the memory resource that is suitable for moving between the memory banks.
68. A method comprising: moving data fragments of a memory resource, said data fragments having been identified as suitable for moving between memory banks of the memory resource, at least one memory bank having been assigned a priority, said data fragments being moved into a higher priority memory bank if there is space in the higher priority bank to do so.
69. An apparatus comprising: at least one processor; and at least one memory including computer program code the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: move data fragments of a memory resource, said data fragments having been identified as suitable for moving between memory banks of the memory resource, at least one memory bank having been assigned a priority, said data fragments being moved into a higher priority memory bank if there is space in the higher priority bank to do so.
PCT/IB2009/055574 2008-12-17 2009-12-08 A method, apparatus and computer program for moving data in memory WO2010070529A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB0823041.9 2008-12-17
GB0823041A GB2466264A (en) 2008-12-17 2008-12-17 Memory defragmentation and compaction into high priority memory banks

Publications (2)

Publication Number Publication Date
WO2010070529A2 true WO2010070529A2 (en) 2010-06-24
WO2010070529A3 WO2010070529A3 (en) 2010-08-12

Family

ID=40343767

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2009/055574 WO2010070529A2 (en) 2008-12-17 2009-12-08 A method, apparatus and computer program for moving data in memory

Country Status (2)

Country Link
GB (1) GB2466264A (en)
WO (1) WO2010070529A2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015135506A1 (en) * 2014-03-13 2015-09-17 Mediatek Inc. Method for controlling memory device to achieve more power saving and related apparatus thereof
US10002072B2 (en) 2015-05-18 2018-06-19 Mediatek Inc. Method and apparatus for controlling data migration in multi-channel memory device

Families Citing this family (3)

Publication number Priority date Publication date Assignee Title
US20120124269A1 (en) * 2010-11-15 2012-05-17 International Business Machines Corporation Organizing Memory for Effective Memory Power Management
US8819379B2 (en) 2011-11-15 2014-08-26 Memory Technologies Llc Allocating memory based on performance ranking
CN107193753B (en) * 2017-06-16 2020-08-04 深圳市万普拉斯科技有限公司 Memory reforming method and device, electronic equipment and readable storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2406668A (en) * 2003-10-04 2005-04-06 Symbian Ltd Memory management in a computing device
WO2007072435A2 (en) * 2005-12-21 2007-06-28 Nxp B.V. Reducingthe number of memory banks being powered
US20080005516A1 (en) * 2006-06-30 2008-01-03 Meinschein Robert J Memory power management through high-speed intra-memory data transfer and dynamic memory address remapping

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
JP4972845B2 (en) * 2001-09-27 2012-07-11 富士通株式会社 Storage system
GB0400661D0 (en) * 2004-01-13 2004-02-11 Koninkl Philips Electronics Nv Memory management method and related system
JP4349274B2 (en) * 2004-12-20 2009-10-21 日本電気株式会社 Magnetic disk drive and control method
GB2426360A (en) * 2005-05-18 2006-11-22 Symbian Software Ltd Reorganisation of memory for conserving power in a computing device
JP2007128126A (en) * 2005-11-01 2007-05-24 Matsushita Electric Ind Co Ltd Information processor
JP2007164650A (en) * 2005-12-16 2007-06-28 Hitachi Ltd Storage control device and control method of same
US20070180187A1 (en) * 2006-02-01 2007-08-02 Keith Olson Reducing power consumption by disabling refresh of unused portions of DRAM during periods of device inactivity

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015135506A1 (en) * 2014-03-13 2015-09-17 Mediatek Inc. Method for controlling memory device to achieve more power saving and related apparatus thereof
US10002072B2 (en) 2015-05-18 2018-06-19 Mediatek Inc. Method and apparatus for controlling data migration in multi-channel memory device

Also Published As

Publication number Publication date
GB2466264A (en) 2010-06-23
WO2010070529A3 (en) 2010-08-12
GB0823041D0 (en) 2009-01-28

Similar Documents

Publication Publication Date Title
JP5466568B2 (en) Resource management method, resource management program, and resource management apparatus
Lee et al. LAST: locality-aware sector translation for NAND flash memory-based storage systems
US9430402B2 (en) System and method for providing stealth memory
KR100390616B1 (en) System and method for persistent and robust storage allocation
JP4281421B2 (en) Information processing system, control method therefor, and computer program
CN100458738C (en) Method and system for management of page replacement
US9355023B2 (en) Virtual address pager and method for use with a bulk erase memory
EP0230354B1 (en) Enhanced handling of large virtual storage extents
US20120198141A1 (en) Integrating data from symmetric and asymmetric memory
TW201301030A (en) Fast translation indicator to reduce secondary address table checks in a memory device
KR20140102679A (en) Working set swapping using a sequentially ordered swap file
CN109857677B (en) Distribution method and device of kernel stack
US7395285B2 (en) Garbage collection system
KR20120058352A (en) Hybrid Memory System and Management Method there-of
WO2015053966A1 (en) A memory system with shared file system
CN101645045A (en) Memory management using transparent page transformation
US8930732B2 (en) Fast speed computer system power-on and power-off method
US20070294550A1 (en) Memory Management With Defragmentation In A Computing Device
US9483400B2 (en) Multiplexed memory for segments and pages
WO2010070529A2 (en) A method, apparatus and computer program for moving data in memory
KR101392062B1 (en) Fast speed computer system power-on & power-off method
Lee et al. WALTZ: Leveraging zone append to tighten the tail latency of LSM tree on ZNS SSD
Venkatesan et al. Ex-tmem: Extending transcendent memory with non-volatile memory for virtual machines
KR20170122090A (en) Garbage collection method for performing memory controller of storage device and memory controler
JP5334048B2 (en) Memory device and computer

Legal Events

Date Code Title Description

121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 09833041; Country of ref document: EP; Kind code of ref document: A2

NENP Non-entry into the national phase
Ref country code: DE

122 Ep: pct application non-entry in european phase
Ref document number: 09833041; Country of ref document: EP; Kind code of ref document: A2