GB2466264A - Memory defragmentation and compaction into high priority memory banks - Google Patents

Info

Publication number
GB2466264A
GB2466264A (application GB0823041A)
Authority
GB
United Kingdom
Prior art keywords
data
memory
page
moving
bank
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB0823041A
Other versions
GB0823041D0 (en)
Inventor
Richard Fitzgerald
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nokia Oyj
Original Assignee
Nokia Oyj
Symbian Software Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nokia Oyj, Symbian Software Ltd filed Critical Nokia Oyj
Priority to GB0823041A priority Critical patent/GB2466264A/en
Publication of GB0823041D0 publication Critical patent/GB0823041D0/en
Priority to PCT/IB2009/055574 priority patent/WO2010070529A2/en
Publication of GB2466264A publication Critical patent/GB2466264A/en
Status: Withdrawn

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/0223 User address space allocation, e.g. contiguous or non-contiguous base addressing
    • G06F12/023 Free address space management
    • G06F12/06 Addressing a physical block of locations, e.g. base addressing, module addressing, memory dedication
    • G06F2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/10 Providing a specific technical effect
    • G06F2212/1028 Power efficiency
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The memory in a computer system is divided into several memory banks. Each bank has a priority. Data fragments are moved from low priority banks to high priority banks. If a bank is emptied, it may be powered down. Searching for data to move may be performed synchronously or asynchronously with respect to moving the data. If an interrupt occurs while data is being moved, the move may be abandoned if the interrupt service routine may have altered the data being moved. A page move function may be used to configure objects in a page which is moved. The priorities of banks may depend on the speed or power consumption of the banks, and may also depend on whether a bank holds data which cannot be moved.

Description

RAM Compaction
FIELD OF THE INVENTION
The present invention relates to the process of compacting a random access memory of a device.
BACKGROUND OF THE INVENTION
Modern computing devices typically include a microprocessor, random access memory (RAM) and a media device such as a ROM memory, flash memory, optical storage or magnetic storage. Memory storage which is write accessible is often prone to data fragmentation. This is where the data stored on the memory storage is broken down into small fragments of data which are distributed throughout the memory storage. As a consequence of the fragmentation of the data, the data is less densely concentrated and more spread out across the memory space, resulting in smaller contiguous blocks of free memory. Data which is distributed across a memory space in this way is un-compacted, whereas data which is unfragmented and contiguous is described as being compacted.
When a device begins to store data in a memory, the memory is empty and the device can choose to locate the data wherever it likes in the memory.
However, as the memory fills up, the choice becomes reduced and the empty spaces in the memory may become too small to hold a single large piece of data. The data is then split up into smaller pieces, and the smaller pieces are placed into the free spaces in the memory. A record is kept of where each small piece of the data has been stored so that the pieces can be found and put back together when the data is needed by the device. For example, for storing files on a permanent storage medium (e.g. a hard disk), this process allows the hard disk to be used to store data files which are larger than any of the individual unused spaces on the hard disk.
Defragmentation techniques usually operate by rearranging the data on a storage medium so that, as far as possible, all data relating to one process or file is contiguous. Normally, some free space is required on the storage medium during the defragmentation process in order to temporarily store data as it is moved.
Figure 1 shows a storage medium containing files A, B, C, D, and E. Data A and E are fragmented into two pieces each: A1 and A2, and E1 and E2, respectively. An example of a conventional defragmentation algorithm is the following:
1) A sweep of the storage medium is performed to establish a map of the stored data and the fragmentation that has occurred.
2) An attempt is made to make the first fragment on the storage medium, fragment A1, into contiguous data. This is done by first moving any fragments which lie in the space required for contiguous data A. Data B is therefore moved out of the way to a temporary location (often the end of the storage medium).
3) Fragment A2 is moved to the end of fragment A1 to form contiguous data A. The second data set, B, is then moved from the temporary location to a position immediately following A.
4) This process continues until all of the fragments have been rearranged and all data has been arranged contiguously in the medium.
This end result is shown in figure 2. The contiguous data is often arranged at the beginning of the storage medium to minimise future fragmentation.
Although figure 2 shows the data arranged contiguously, this is not always possible since certain files may be immovable.
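The rearrangement described above can be sketched in miniature. The following Python model (the name `defragment` and the slot representation are illustrative, not from the application) treats the medium as a list of slots and packs each file's fragments contiguously at the start, which is the end state shown in figure 2:

```python
def defragment(medium):
    """Pack each file's fragments contiguously at the start of the medium.

    `medium` is a list of slots; each slot holds a (file, part) tag such as
    ("A", 1), or None for free space.  Returns a new list of the same length
    with every file's parts adjacent and in order, and free space at the end.
    """
    # 1) Sweep: map which files are present and collect their fragments.
    files = []                      # files in order of first appearance
    parts = {}                      # file -> list of its fragment numbers
    for slot in medium:
        if slot is None:
            continue
        name, part = slot
        if name not in parts:
            files.append(name)
            parts[name] = []
        parts[name].append(part)
    # 2-4) Rebuild the medium with each file's fragments contiguous.
    packed = [(name, part) for name in files for part in sorted(parts[name])]
    return packed + [None] * (len(medium) - len(packed))

# The fragmented medium of figure 1, in this toy representation:
disk = [("A", 1), ("B", 1), ("A", 2), None, ("C", 1), ("E", 1), None, ("E", 2)]
compacted = defragment(disk)
```

A real defragmenter moves fragments in place via a temporary location, as in steps 2 and 3; the sketch only reproduces the before/after layout.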
Defragmentation of a system's storage medium is often scheduled by the user of the system to run at regular intervals. Sometimes, the defragmentation process may be triggered when the system detects a certain level of fragmentation on the storage medium.
For Random Access Memory, an additional advantage of defragmenting and compacting data is to save power. Large memory arrays are often divided into regions called banks. Each bank can be placed into a low-power data retention mode independently of the other banks. While in this low-power mode the data content is preserved but is not accessible. By compacting the data into the smallest possible number of active banks the number of banks which can be switched to low-power mode is maximized, and therefore the power consumption is minimised.
It is known for mobile computing devices to use system idle time to run RAM defragmentation processes. Similar to file system fragmentation, random access memory containing active processes and working data can become fragmented as data is frequently swapped in and out of RAM. The UK patent application published as GB 2406668 describes using the system idle thread to perform defragmentation of the data stored in the system RAM. One advantage of this method of handling defragmentation is that the defragmentation is only performed whilst the device is idle, which prevents the defragmentation from having any visible effect on the performance of the device.
However, sometimes it is desirable to give the defragmentation a higher priority than background, for example to compact memory before switching the entire device to low-power standby mode in order to obtain the greatest power saving. Therefore it is desirable that the defragmentation algorithm can be configured for different situations.
SUMMARY OF THE INVENTION
According to a first aspect, the present invention provides a method of compacting data according to claim 1.
According to a second aspect, the present invention provides a system for compacting data according to claim 16.
According to a third aspect, the present invention provides a method of compacting data according to claim 19.
According to a fourth aspect, the present invention provides a method of moving a data fragment according to claim 21.
According to a fifth aspect, the present invention provides a system for moving a data fragment according to claim 24.
According to a sixth aspect, the present invention provides a method of moving a memory page according to claim 25.
According to a seventh aspect, the present invention provides a system for moving a memory page according to claim 27.
BRIEF DESCRIPTION OF THE DRAWINGS
A description of a preferred embodiment of the present invention, presented by way of example only, will now be made with reference to the accompanying drawings, wherein like reference numerals refer to like parts, and wherein:
figure 1 illustrates a memory containing some fragmented data;
figure 2 illustrates the memory of figure 1 in which the fragmented data has been defragmented;
figure 3 illustrates a mobile computing device;
figure 4 illustrates a memory containing some fragmented data, the memory having been partitioned into banks of varying priority;
figure 5 illustrates an embodiment of the invention in which the search system and the defragmentation system are separate threads;
figure 6 illustrates a memory containing some fragmented data, wherein each data fragment has been tagged by the search algorithm;
figure 7 illustrates the memory of figure 6, wherein the tags have been arranged in an ordered list;
figure 8 illustrates an embodiment in which the search algorithm searches the memory banks in order of priority; and
figure 9 illustrates an embodiment in which the search algorithm searches the banks of a particular priority one or more times before moving on to the next.
DETAILED DESCRIPTION
The following description provides various techniques for performing improved RAM compaction.
Figure 3 shows an embodiment of the invention featuring a mobile computing device 100 with a central processing unit (CPU) 10, random access memory 30, and permanent storage medium 40. The CPU, RAM, and permanent storage medium are interconnected by system bus 20. RAM compaction system 50 is a processor thread running on CPU 10 and has high level access to RAM 30. The RAM compaction system may alternatively be run on a secondary processor or be implemented as a hardware layer between the CPU and the RAM.
Memory Bank Prioritisation
In an embodiment of the invention shown in figure 4, the RAM 30 is spread across multiple memory banks.
Each memory bank is assigned a priority. The priority of the bank indicates the importance of keeping the bank powered up and in use. Lower priority memory banks are the memory banks which the system prefers to use the least, and so will work hardest to empty. The priority of a memory bank may depend, for example, on the energy consumed by the memory bank or the type of technology used in the memory bank. For example, if the energy consumed by a particular memory bank to retain data is relatively high, it may be assigned a low priority. Similarly, if read and/or write access to the memory bank is relatively slow, it may again be assigned a low priority.
For simplicity, figure 4 shows just three memory banks: a low priority memory bank, a medium priority memory bank and a high priority memory bank.
However, in practice any number of memory banks can be used. The invention is similarly not limited to just three discrete levels of priority; other numbers of priority levels could be used. In some embodiments, every memory bank is assigned a different priority level (possibly arbitrarily, in the case where two banks are in reality equally important); unique priority levels simplify some implementations. In other embodiments, more than one bank may share the same priority level.
If the storage of information in memory is based upon the priority of the available memory banks, with the information stored preferably in high-priority memory banks, low priority memory banks will often be empty and can therefore be powered-off. The operating system memory allocation algorithm will attempt to satisfy any memory request from the highest possible priority bank, moving to lower-priority banks until the total required amount of memory has been allocated. As the memory becomes fragmented it will become more difficult to satisfy memory requests from high priority banks.
Defragmentation will rectify this by sorting the information in memory, based on bank priority. In an embodiment where the most power hungry memory banks are assigned a low priority rating, memory will be moved out of the low priority/power hungry memory banks into high priority/low power usage memory banks, allowing the more power hungry banks to be powered down, minimising the total power consumption of the memory.
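As a rough illustration of this behaviour, the following Python sketch (the `Bank` class and both function names are hypothetical) allocates from the highest-priority banks first and compacts by draining the lowest-priority banks into free space higher up, reporting which banks could then be powered down:

```python
class Bank:
    """A memory bank with a priority; higher value = keep powered and in use."""
    def __init__(self, name, priority, size):
        self.name, self.priority, self.size = name, priority, size
        self.used = 0
    def free(self):
        return self.size - self.used

def allocate(banks, amount):
    """Satisfy a request from the highest-priority banks first,
    moving to lower-priority banks until the total is allocated."""
    granted = []
    for bank in sorted(banks, key=lambda b: -b.priority):
        take = min(bank.free(), amount)
        if take:
            bank.used += take
            granted.append((bank.name, take))
            amount -= take
        if amount == 0:
            return granted
    raise MemoryError("request cannot be satisfied")

def compact(banks):
    """Move data out of low-priority banks into free space in higher-priority
    ones; banks emptied this way can be switched to low-power mode."""
    ordered = sorted(banks, key=lambda b: b.priority)
    for i, src in enumerate(ordered):            # lowest priority first
        for dst in reversed(ordered[i + 1:]):    # fill highest priority first
            moved = min(src.used, dst.free())
            src.used -= moved
            dst.used += moved
    return [b.name for b in banks if b.used == 0]  # candidates to power down
```

The sketch treats data as freely divisible; the real system moves discrete, sometimes immovable, fragments, which is what the search algorithms below deal with.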
Preferably, page tables and page directories, or equivalent structures used by the system Memory Management Unit, are forcibly allocated into high priority banks. This is achieved by searching the high priority banks for a sufficiently large contiguous block, moving that block into lower priority memory, then moving the page tables or page directories into the newly freed space in the high priority memory bank. This reduces the likelihood that these structures will need to be moved again when the memory is later defragmented. As access to page tables and directories is particularly critical to the operation of the device, reduced movement of these structures will result in higher device performance.
Separate search system and defragmentation system
Figure 5 shows an embodiment of the RAM compaction system 50 in which it is broken down into two components: a search system 110 and a defragmentation system 120. Each of these two components can be individually configured. Search system 110 identifies which memory areas can be moved and how to move them. Defragmentation system 120 determines when and how to run the defragmentation on the memory areas identified by search system 110.
Separation of the RAM compaction system into the two components advantageously allows the components to be individually configured and individually optimised. For example, if a newer, more efficient search system is developed, it can be integrated into the RAM compaction system without rebuilding the existing defragmentation system. In fact, the defragmentation system need not even be stopped whilst the new search system is introduced. Likewise, the defragmentation mechanism can be replaced without changing the search algorithm, for example to provide a hardware-optimised defragmentation or to allow defragmentation at times other than background idle.
Furthermore, for a parallel processing machine such as a device using a multi-core CPU, the componentisation of the RAM compaction system 50 allows the overall load of compaction to be more easily and effectively distributed, for example across the multiple cores. For example, the search system 110 may use a first core, whilst the defragmentation system may use a second core.
Fast searching using memory tagging and indexing
An embodiment of the invention is shown in figure 6. Using a defined search algorithm, search component 110 tags each part of the memory with a flag 810 which indicates whether it is movable or non-movable. The tagging of a part of memory is preferably performed in dependence on the data type stored in each part of the memory and whether it would be suitable for moving.
As shown in figure 7, once the memory has been tagged, the memory tags are compiled into an ordered list 910 within which fast searching can be performed. This allows fragments which are potentially moveable to be quickly identified by defragmentation system 120. The location of the tagged data in memory or the priority of the memory bank in which the tagged data is stored may be used to determine the list order. Preferably, the tagging list is ordered by bank priority and then the sequential order of the page within each bank; this allows the defragmentation component 120 to easily ignore banks where there is little or no benefit to be gained from defragmentation.
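A minimal sketch of this tagging and indexing, assuming invented page metadata (`bank_priority`, `index`, `type`) and an arbitrary choice of which data types count as movable:

```python
def tag_pages(pages, movable_types=frozenset({"heap", "cache"})):
    """Tag each page with a movable flag based on the data type it holds.

    `pages` is a list of dicts with 'bank_priority', 'index' and 'type' keys,
    a stand-in for real page metadata.  Adds a 'movable' tag to each page.
    """
    for page in pages:
        page["movable"] = page["type"] in movable_types
    return pages

def build_ordered_list(pages):
    """Compile the movable-page tags into a list ordered by bank priority and
    then by the page's sequential position within its bank, so that the
    defragmenter can scan low-priority banks first without re-searching
    physical memory."""
    movable = [p for p in tag_pages(pages) if p["movable"]]
    return sorted(movable, key=lambda p: (p["bank_priority"], p["index"]))
```

The (priority, position) sort key mirrors the preferred list order described above; only newly stored data needs to be tagged and merged into the list later.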
Furthermore, apart from searching new data stored in the memory and updating the ordered list to include the new data, no further searching is required.
Preferably, the compilation of the ordered list is performed by search system 110, and subsequent access to the ordered list is provided to defragmentation system 120.
The ordered list allows accelerated defragmentation and compaction, as the RAM compaction system may subsequently refer to the index rather than searching the physical memory for a movable section of memory.
In one embodiment, the list is stored as a bitmap so that it is faster to scan than a full table of searchable memory areas.
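One way such a bitmap might be realised, as an illustrative sketch only: one movable/non-movable bit per page packed into an integer, with the next movable page located by a standard lowest-set-bit trick rather than by walking a table of memory areas:

```python
def to_bitmap(movable_flags):
    """Pack one movable/non-movable bit per page into a single integer."""
    bits = 0
    for i, movable in enumerate(movable_flags):
        if movable:
            bits |= 1 << i
    return bits

def next_movable(bits, start=0):
    """Index of the first movable page at or after `start`, or -1 if none."""
    bits >>= start
    if bits == 0:
        return -1
    # (bits & -bits) isolates the lowest set bit; bit_length gives its index.
    return start + (bits & -bits).bit_length() - 1
```

Scanning a word of bits examines up to 64 pages per memory access, which is why a bitmap is faster to scan than a full table of searchable memory areas.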
The search algorithm and defragmentation system may be run asynchronously to each other or synchronously. The simpler embodiment is synchronous operation, in which the defragmentation system is launched for each fragment found by the search algorithm, and the search algorithm waits for completion of the defragmentation before searching for the next movable fragment. The memory fragment being moved is locked or tagged so that it cannot change type or status between the search and completion of the defragmentation, so that the conditions on which the decision was made to move this fragment do not become invalid when the defragmentation is performed. The locking or tagging may be performed in various ways depending on the operating system. One possibility is to require a mutex to be held in order to change or move a fragment. An alternative is to allow the type to be changed and to check at the end of defragmentation whether this has happened; if so, the copied fragment is discarded and the defragmentation of this single fragment is abandoned.
Asynchronous operation allows the search and defragmentation to run independently, for example as separate threads or on separate cores on a multi-core system. This has the disadvantage that the list of fragments to move which was compiled by the search algorithm may be invalid when the defragmentation system runs, because the status of some or all of the fragments has been changed. This disadvantage can be avoided in a number of ways. One possible implementation is that any operation that can change the type or status of a page, such as a deallocation, reallocation, locking or unlocking, will cause the page to be removed from the list compiled by the search algorithm and thus be ignored by the defragmentation system.
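A toy model of this invalidation protocol (the class and method names are illustrative only, and real code would need locking around the shared set):

```python
class MoveList:
    """Fragment list shared between an asynchronous search thread and the
    defragmentation thread.

    Any operation that can change a page's type or status (deallocation,
    reallocation, locking, unlocking) must call `invalidate`, so that the
    defragmenter never acts on a stale entry.
    """
    def __init__(self):
        self.pending = set()

    def found(self, page):
        """Called by the search thread when it tags a movable page."""
        self.pending.add(page)

    def invalidate(self, page):
        """Called on any state-changing operation; removing the page from
        the list makes the defragmenter ignore it."""
        self.pending.discard(page)

    def next_to_move(self):
        """Called by the defragmentation thread; returns a page or None."""
        return self.pending.pop() if self.pending else None
```

Because invalidation simply removes the entry, the defragmenter needs no extra check: a page it receives was still movable at the moment it was handed over.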
Search Algorithms
Different search algorithms may be used by search component 110 to identify areas of memory suitable to be moved.
In an embodiment of the invention, shown in figure 8, the search algorithm used by search system 110 of the RAM compaction system causes the component to operate according to the following steps: 1. Search the lowest priority memory bank for areas of memory that can be moved into available space in a higher priority memory bank.
2. Once an area of memory is identified as suitable to be moved, the piece of memory is tagged to be moved and the search is continued from that point.
3. Once the bank has been fully searched and no new memory has been identified as being suitable to be moved, the search moves on to the bank with the next highest priority. This continues until the bank with the highest priority is reached.
4. When the bank with the highest priority is reached, the search is reset to the bank with the lowest priority.
Preferably, the highest priority bank is not searched because there is no bank of higher priority that its contents can usefully be moved to: moving an area of memory out of the highest priority bank will never result in an increase in compaction. This is reflected in the steps above.
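The four steps above might be modelled as follows; the `banks` mapping from priority to page list and the page dicts are invented for illustration:

```python
def sweep(banks):
    """One search sweep, from lowest to highest priority, tagging pages that
    could be moved up.  The highest-priority bank is skipped, since its
    contents have no better bank to move to.  Returns the tagged
    (priority, page) pairs in the order they were found.

    `banks` maps a bank's priority to its list of pages; each page is a dict
    with a 'movable' flag (hypothetical metadata).
    """
    tagged = []
    priorities = sorted(banks)              # low to high priority
    for priority in priorities[:-1]:        # step 3/4: never search the top bank
        for page in banks[priority]:
            if page["movable"] and not page.get("tagged"):
                page["tagged"] = True       # step 2: tag and continue from here
                tagged.append((priority, page))
    return tagged
```

Repeating `sweep` corresponds to resetting to the lowest-priority bank once the highest has been reached; already-tagged pages are not reported again.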
In a refinement to the above embodiment, the search system 110 may skip all banks which include immovable areas of memory. This is because it will not be possible to power-down a bank as long as it includes an immovable area, and there is therefore little to be gained by otherwise emptying the bank. A better solution may be to increase the priority of such banks, to ensure that they are filled in preference to other banks. If, on the other hand, it is expected that the bank will in the future contain no immovable areas, the bank may be searched (and, optionally, de-prioritised) in order to ensure it is kept as empty as possible, so that it can be powered-down as soon as possible after the last immovable area ceases to exist.
The search algorithm of this embodiment causes search component 110 to do a sweep, searching from low to high priority memory banks, and resetting to the lowest priority bank once it has reached the highest priority bank.
Search component 110 preferably stores its search position when interrupted so that it can return to that position when the search is resumed.
Once an area of memory has been identified as moveable, it is desirable to move that area to as high a priority bank as possible. Therefore, only banks with a higher priority than the source bank should be considered. Preferably, the highest priority target location is found using the kernel allocator's normal page allocation function. If this is not possible, a separate search for a suitable target location can be made.
In an alternative embodiment, the search algorithm causes search system 110 to repeatedly search the lowest priority banks until they become completely free, before switching to a higher priority set of banks. This embodiment is shown in figure 9. In this embodiment, the search component of the RAM compaction system 50 operates according to the following steps: 1. Search the lowest priority memory bank for areas of memory that can be moved into available space in a higher priority memory bank.
2. Once an area of memory is identified as suitable to be moved, the piece of memory is tagged to be moved and the search is reset to the beginning of the lowest priority bank.
3. Once a bank has been fully searched and no new memory has been identified as being suitable to be moved, the search moves on to the bank with the next highest priority. This continues until the bank with the highest priority is reached.
4. When the highest priority bank is reached, the search is reset to the lowest priority bank.
As previously, these steps represent a preferable embodiment in which the highest priority bank is never searched.
The search algorithm of this latter embodiment ensures that the search component 110 will always find any low-priority memory contents that are suitable for moving. Whilst it may result in the search component 110 repeatedly scanning low-priority banks to pick up only a small amount of movable memory each time (especially if the search is frequently interrupted), this approach has the advantage of keeping the lowest-priority banks as free as possible.
In another embodiment of the invention, the search algorithm causes search component 110 to perform one complete search sweep from the low-priority banks to the high-priority banks first of all, before the search returns to the lowest-priority bank and proceeds as in either of the above embodiments.
New allocations in the low-priority bank will not be moved until a complete scan has been done, but this is outweighed by the benefit that this algorithm maximises the amount of defragmentation that can be done with a single visit to each bank, and so is more efficient overall. The reason for this efficiency is that when a single sweep is used the search system 110 is less likely to become 'stuck' in a loop where a bank is cleared, then memory is allocated within it, then the bank is cleared again, and so on, with the bank oscillating between active and inactive.
The above embodiments represent preferred behaviours for the search system 110, but it will be appreciated that others are possible and the choice of a particular behaviour will vary from application to application. For example, in one embodiment, the search system 110 is directed by the device to repeatedly sweep a specific bank with a view to clearing (and therefore powering-down) that particular bank as soon as possible.
In one embodiment of the invention, the search algorithm is configured to predict which sections of memory are likely to become unmovable at a later time and mark these sections as unmovable. For example, memory which can be accessed by physical address or by a virtual mapping of physical RAM, whether it is currently being used or not, might be considered to be an unsuitable candidate for moving. Therefore, this memory can be marked as 'non-movable'.
Safe interrupt handling
A typical computer system comprises a main operating system (OS) with a processing unit and an interrupt handler. Processes running on the computer system usually access the physical memory of a device by means of a virtual address provided by the operating system that maps to the physical address.
Data fragments may be moved by copying the data fragment from one area of physical memory to another and then using the memory management unit (MMU), under control of the computer system's operating system, to alter the virtual memory address for the data fragment so that it points to the new physical address.
This leaves the virtual memory looking unchanged to the processes running on the OS.
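A minimal model of such a move, with a Python list standing in for physical frames and a dict standing in for the page table (both, of course, stand-ins for real MMU structures):

```python
def move_fragment(phys, page_table, vpage, dst):
    """Move the physical frame behind virtual page `vpage` to frame `dst`.

    `phys` is a list representing physical frames and `page_table` maps a
    virtual page number to a physical frame number.  Processes keep using
    `vpage`; only the mapping changes, so the virtual address space looks
    unchanged to them.
    """
    src = page_table[vpage]
    phys[dst] = phys[src]       # copy the data to the new physical frame
    page_table[vpage] = dst     # repoint the virtual page via the "MMU"
    phys[src] = None            # the old frame is now free
```

The interrupt-safety discussion below concerns what happens if the source frame is modified between the copy and the repointing.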
The interrupt handler provides the mechanism for a process running on the processing unit to be interrupted by an interrupt signal from hardware or another software process which indicates the need for attention. A system which incorporates interrupts is known as an interrupt driven system. The interrupt handler features a mechanism which can detect whether an interrupt occurred during any specified time period.
When a fragment is being moved, there must be some way to prevent the data within it changing during the move otherwise the resulting copy could contain a mix of old and new data. One method, where the hardware supports it, is to use the Memory Management Unit (MMU) to mark the page in such a way that an access to it from any code other than the compaction code will result in a CPU exception or interrupt which can be used to abort the copy.
The advantage of this implementation is that the move is only aborted if the source page was genuinely accessed. The disadvantage is that it is complex to implement, requires support from the OS and has a high overhead in the case where the memory is accessed during defragmentation.
An embodiment of the invention provides an alternative method of moving memory pages that does not depend on MMU locking of the page or have a high overhead if the page is accessed during compaction. When moving pages of memory, a temporary mapping is made between the source and the destination page. The data segment which the RAM compaction system is attempting to move is copied by the RAM compaction system from the source page on the memory storage to the destination page. This copy action is performed with interrupts enabled. Once the copy action is complete, interrupts are disabled and the interrupt handler of the operating system is queried to determine whether any interrupt of a type that would suggest that the source page might have been modified occurred during the period in which the copy action was being performed. If such an interrupt did occur, a 'worst case' scenario in which the source page has been altered is assumed. The destination page is designated as empty and the move is abandoned. Otherwise, the page mapping is changed to point to the destination page instead of the source page, the source page is designated as empty and the temporary mapping is removed.
This method of ensuring data integrity requires minimal overheads and reduces complexity. Compared to an implementation that relies upon the MMU to mark areas of memory as inaccessible whilst they are being moved, this method may have a higher rate of falsely aborted moves, but it is simpler to implement and has a lower overhead. It is also usable on systems which do not have a full MMU, or which cannot selectively protect just small regions of memory. Another advantage is that it allows memory that is accessed in interrupts to be moved, because compaction does not impede the normal operation of other code. This would not be possible with the MMU-based locking scheme, because that could result in an interrupt causing an exception, which is usually a fatal event and will at least incur a large time overhead on servicing the interrupt.
In an alternative embodiment, the move is abandoned if an interrupt of any type occurs. This further reduces complexity and therefore some overheads.
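This simpler variant, abandoning the move if any interrupt at all occurred during the copy, can be sketched as follows; `interrupt_count` is a hypothetical stand-in for querying the OS interrupt handler, and real code would disable interrupts around the final check:

```python
def move_page(src, dst, interrupt_count):
    """Copy `src` into `dst` with interrupts enabled, then abandon the move
    if any interrupt fired during the copy, assuming the worst case that
    the interrupt service routine modified the source page.

    `src` and `dst` are lists standing in for pages; `interrupt_count` is a
    zero-argument callable returning the running interrupt count.
    Returns True if the move succeeded, False if it was abandoned.
    """
    before = interrupt_count()
    dst[:] = src                  # copy performed with interrupts enabled
    # -- interrupts would be disabled from here in real code --
    if interrupt_count() != before:
        dst[:] = []               # designate destination as empty, abandon
        return False              # caller keeps using the source page
    src[:] = []                   # success: source page designated empty
    return True                   # caller repoints the mapping to dst
```

A falsely aborted move costs only a retry, which is the trade-off the text describes against MMU-based protection.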
Object orientated memory management
As described above, data fragments may be moved by moving the data fragment from one area of physical memory to another and then using the MMU to alter the virtual memory address for the data fragment to point to the new physical address. In reality, there may be other consequences of moving physical memory around which need to be addressed. For example, consider demand paging schemes, in which the system only copies a memory page into main memory when a process attempts to access it, and may need to keep a list of the actual physical pages being paged in and out.
In one embodiment of the invention, the compaction system uses object orientated function calling to allow an object to handle the moving of a page associated to it in the manner most appropriate to the object. Each type of data stored in the memory pages has an associated object which contains the methods to handle that type of page with respect to compaction. For example, if the compaction system has moved a memory page which belongs to a first object, it calls a function in the first object to swap the link to the old page for the new one. The method in the first object will know exactly how to handle the page type associated with the first object, including any unusual considerations that need to be made when moving such a page.
Preferably, the calling of the appropriate object is only performed by the defragmentation component 120 once the region of memory is successfully moved. The defragmentation component 120 does not need to consider the details of moving specific data types; it simply calls an object to notify the successful move.
In one embodiment, the 'object' associated with a particular data type is a C++ class.
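The object orientated fix-up might look like the following sketch (in Python rather than C++ for brevity; all class and method names are invented), in which each owner class knows how to relink its own pages after a move, while the defragmenter only makes a uniform notification call:

```python
class PageOwner:
    """Base class: each data type knows how to fix itself up after one of
    its pages has been physically moved."""
    def page_moved(self, old_frame, new_frame):
        raise NotImplementedError

class HeapPages(PageOwner):
    """An owner that keeps a simple list of its physical frames."""
    def __init__(self, frames):
        self.frames = frames
    def page_moved(self, old_frame, new_frame):
        # Swap the link to the old page for the new one.
        self.frames[self.frames.index(old_frame)] = new_frame

class DemandPagedOwner(PageOwner):
    """A demand-paging owner must also update its record of which
    physical pages are currently resident."""
    def __init__(self, resident):
        self.resident = set(resident)
    def page_moved(self, old_frame, new_frame):
        self.resident.discard(old_frame)
        self.resident.add(new_frame)

def notify_move(owner, old_frame, new_frame):
    """Called by the defragmenter after a successful move; it needs no
    knowledge of the owner's page type or bookkeeping."""
    owner.page_moved(old_frame, new_frame)
```

This mirrors the scheme described above: the defragmentation component simply calls the object to notify the successful move, and any unusual considerations for that page type live in the object's own method.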
The applicant hereby discloses in isolation each individual feature described herein and any combination of two or more such features, to the extent that such features or combinations are capable of being carried out based on the present specification as a whole in the light of the common general knowledge of a person skilled in the art, irrespective of whether such features or combinations of features solve any problems disclosed herein, and without limitation to the scope of the claims. The applicant indicates that aspects of the present invention may consist of any such individual feature or combination of features. In view of the foregoing description it will be evident to a person skilled in the art that various modifications may be made within the scope of the invention.

Claims (6)

CLAIMS
  1. A method of compacting data in a memory resource comprising a plurality of memory banks, the method comprising the steps of: assigning a priority to each memory bank; identifying at least one data fragment in the memory resource that is suitable for moving between the memory banks; and moving the at least one identified data fragment into a higher priority memory bank, if there is space in the higher priority bank to do so.
  2. The method of claim 1, comprising powering-down a memory bank once it has been emptied.
  3. The method of any preceding claim, wherein the priority of a bank is dependent upon its power consumption.
  4. The method of any preceding claim, wherein the priority of a bank is dependent upon at least one of the read and write access speeds of the bank.
  5. The method of any preceding claim, wherein the priority of a bank is dependent upon the moveability of data fragments already stored in the bank.
  6. The method of any preceding claim, wherein the step of identifying data fragments suitable for moving comprises searching the memory banks of the memory resource in an order determined by the priority associated with each bank.
  10. The method of any preceding claim, wherein the step of identifying data fragments in the memory resource suitable for moving comprises searching the memory banks in order of increasing priority.
  11. The method of any preceding claim, wherein the highest priority memory bank is not searched for movable data fragments.
  12. The method of any preceding claim, wherein any memory bank already known to contain immovable data fragments is not searched for movable data fragments.
  13. The method of any preceding claim, wherein a data fragment is suitable for moving if it is currently moveable and is expected to remain moveable.
  14. The method of any preceding claim, wherein a data fragment is suitable for moving if it cannot be accessed by the processing unit using a physical address.
  15. A memory compaction system for compacting data in a memory resource comprising multiple memory banks, the system being configured to perform the method of any preceding claim.
  16. A system for compacting data in a memory resource, the system comprising: a search system configured to identify data fragments in the memory resource that are suitable for moving; and a defragmentation system configured to move data fragments that have been identified by the search system in a manner that compacts data within the memory resource, wherein the search and defragmentation systems operate synchronously with one another.
  17. The system of claim 16, wherein compacting data comprises defragmenting the data.
  18. The system of claim 16 or claim 17, configured to perform the method of any of claims 1 to 14.
  19. A system for compacting data in a memory resource, the system comprising: a search system configured to identify data fragments in the memory resource that are suitable for moving; and a defragmentation system configured to move data fragments that have been identified by the search system in a manner that compacts data within the memory resource, wherein the search and defragmentation systems operate asynchronously with one another.
  20. The system of claim 19, wherein compacting data comprises defragmenting the data.
  21. The system of claim 19 or claim 20, configured to perform the method of any of claims 1 to 14.
  22. The system of any of claims 19 to 21, wherein the action of changing the type or status of a page of identified data prevents that page from being moved by the defragmentation system.
  23. A method of compacting data in a memory resource, the method comprising: identifying data fragments in the memory resource that are suitable for moving; and moving identified data fragments in a manner that compacts data within the memory resource, wherein the identifying and moving steps are performed synchronously with one another.
  24. The method of claim 23, further comprising the method of any of claims 1 to 14.
  25. A method of compacting data in a memory resource, the method comprising: identifying data fragments in the memory resource that are suitable for moving; and moving identified data fragments in a manner that compacts data within the memory resource, wherein the identifying and moving steps are performed asynchronously with one another.
  26. The method of claim 25, further comprising: detecting a change in the type or state of a page of data; and preventing an identified page of data from being moved in response to the detection.
  27. The method of claim 26, further comprising the method of any of claims 1 to 14.
  28.
A method of moving a data fragment from a source location to a destination location within a memory resource of an interrupt-driven processing unit, the method comprising: copying the data fragment from the source location to the destination location; monitoring whether any interrupts occur during the copying operation which suggest that the data fragment at the source location has been modified during the copy; if an interrupt indicating modification did occur, aborting the move operation; and if an interrupt indicating modification did not occur, finalising the move operation.
  29. The method of claim 28, wherein: aborting the move operation comprises indicating that the memory at the destination location is unused; and finalising the move operation comprises indicating that the data fragment can be found at the destination location and that the source location is unused.
  30. The method of any of claims 1 to 12, wherein the moving step is performed according to any of claims 28 to 29.
  31. A system for moving a data fragment from a source location to a destination location within a memory resource of an interrupt-driven processing unit, the system configured to perform the method of any of claims 28 to 30.
  32. A method of moving a memory page from a source location to a destination location within a memory resource of an object orientated computer system, the method comprising the steps of: identifying the type of data comprised in the memory page; copying the memory page from the source location to the destination location; and, subsequent to the copying operation, calling a page move handling function of an object of the object orientated computer system, wherein the object used is determined by the identified type of data comprised in the memory page and the page move handling function of the object is configured to perform any remaining steps for the execution of the move of a page of the identified type.
  33. The method of claim 32, wherein the object is a C++ class.
  34. A system for moving a memory page from a source location to a destination location within a memory resource of an object orientated computer system, the system configured to: identify the type of data comprised in the memory page; copy the memory page from the source location to the destination location; and, subsequent to the copying operation, call a page move handling function of an object of the object orientated computer system, wherein the object used is determined by the identified type of data comprised in the memory page and the page move handling function of the object is configured to perform any remaining steps for the execution of the move of a page of the identified type.
GB0823041A 2008-12-17 2008-12-17 Memory defragmentation and compaction into high priority memory banks Withdrawn GB2466264A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB0823041A GB2466264A (en) 2008-12-17 2008-12-17 Memory defragmentation and compaction into high priority memory banks
PCT/IB2009/055574 WO2010070529A2 (en) 2008-12-17 2009-12-08 A method, apparatus and computer program for moving data in memory

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB0823041A GB2466264A (en) 2008-12-17 2008-12-17 Memory defragmentation and compaction into high priority memory banks

Publications (2)

Publication Number Publication Date
GB0823041D0 GB0823041D0 (en) 2009-01-28
GB2466264A true GB2466264A (en) 2010-06-23

Family

ID=40343767

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0823041A Withdrawn GB2466264A (en) 2008-12-17 2008-12-17 Memory defragmentation and compaction into high priority memory banks

Country Status (2)

Country Link
GB (1) GB2466264A (en)
WO (1) WO2010070529A2 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120124269A1 (en) * 2010-11-15 2012-05-17 International Business Machines Corporation Organizing Memory for Effective Memory Power Management
US8819379B2 (en) 2011-11-15 2014-08-26 Memory Technologies Llc Allocating memory based on performance ranking
CN107193753A (en) * 2017-06-16 2017-09-22 深圳市万普拉斯科技有限公司 Memory reforming method and device, electronic equipment and readable storage medium

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015135506A1 (en) * 2014-03-13 2015-09-17 Mediatek Inc. Method for controlling memory device to achieve more power saving and related apparatus thereof
US10002072B2 (en) 2015-05-18 2018-06-19 Mediatek Inc. Method and apparatus for controlling data migration in multi-channel memory device

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003108317A (en) * 2001-09-27 2003-04-11 Fujitsu Ltd Storage system
WO2005069148A2 (en) * 2004-01-13 2005-07-28 Koninklijke Philips Electronics N.V. Memory management method and related system
JP2006172355A (en) * 2004-12-20 2006-06-29 Nec Corp Magnetic disk device and control method
WO2006123140A1 (en) * 2005-05-18 2006-11-23 Symbian Software Limited Memory management in a computing device
JP2007128126A (en) * 2005-11-01 2007-05-24 Matsushita Electric Ind Co Ltd Information processor
WO2007072435A2 (en) * 2005-12-21 2007-06-28 Nxp B.V. Reducing the number of memory banks being powered
EP1808758A2 (en) * 2005-12-16 2007-07-18 Hitachi, Ltd. Storage controller and method of controlling the same
WO2007090195A1 (en) * 2006-02-01 2007-08-09 Qualcomm Incorporated Reducing power consumption by disabling refresh of unused portions of dram during periods of device inactivity

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2406668B (en) * 2003-10-04 2006-08-30 Symbian Ltd Memory management in a computing device
US20080005516A1 (en) * 2006-06-30 2008-01-03 Meinschein Robert J Memory power management through high-speed intra-memory data transfer and dynamic memory address remapping

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003108317A (en) * 2001-09-27 2003-04-11 Fujitsu Ltd Storage system
WO2005069148A2 (en) * 2004-01-13 2005-07-28 Koninklijke Philips Electronics N.V. Memory management method and related system
JP2006172355A (en) * 2004-12-20 2006-06-29 Nec Corp Magnetic disk device and control method
WO2006123140A1 (en) * 2005-05-18 2006-11-23 Symbian Software Limited Memory management in a computing device
JP2007128126A (en) * 2005-11-01 2007-05-24 Matsushita Electric Ind Co Ltd Information processor
EP1808758A2 (en) * 2005-12-16 2007-07-18 Hitachi, Ltd. Storage controller and method of controlling the same
WO2007072435A2 (en) * 2005-12-21 2007-06-28 Nxp B.V. Reducing the number of memory banks being powered
WO2007090195A1 (en) * 2006-02-01 2007-08-09 Qualcomm Incorporated Reducing power consumption by disabling refresh of unused portions of dram during periods of device inactivity

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
De La Luz V, Kandemir M, Kolcu I; Automatic data migration for reducing energy consumption in multi-bank memory systems *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120124269A1 (en) * 2010-11-15 2012-05-17 International Business Machines Corporation Organizing Memory for Effective Memory Power Management
US8819379B2 (en) 2011-11-15 2014-08-26 Memory Technologies Llc Allocating memory based on performance ranking
US9069663B2 (en) 2011-11-15 2015-06-30 Memory Technologies Llc Allocating memory based on performance ranking
CN107193753A (en) * 2017-06-16 2017-09-22 深圳市万普拉斯科技有限公司 Memory reforming method and device, electronic equipment and readable storage medium
CN107193753B (en) * 2017-06-16 2020-08-04 深圳市万普拉斯科技有限公司 Memory reforming method and device, electronic equipment and readable storage medium

Also Published As

Publication number Publication date
GB0823041D0 (en) 2009-01-28
WO2010070529A3 (en) 2010-08-12
WO2010070529A2 (en) 2010-06-24

Similar Documents

Publication Publication Date Title
JP5466568B2 (en) Resource management method, resource management program, and resource management apparatus
KR100390616B1 (en) System and method for persistent and robust storage allocation
AU2012352178B2 (en) Working set swapping using a sequentially ordered swap file
US8176233B1 (en) Using non-volatile memory resources to enable a virtual buffer pool for a database application
EP0230354B1 (en) Enhanced handling of large virtual storage extents
US9298377B2 (en) Techniques for reducing read I/O latency in virtual machines
JP3771803B2 (en) System and method for persistent and robust memory management
TW201301030A (en) Fast translation indicator to reduce secondary address table checks in a memory device
US8868622B2 (en) Method and apparatus for allocating resources in a computer system
CN109857677B (en) Distribution method and device of kernel stack
CN103558992A (en) Off-heap direct-memory data stores, methods of creating and/or managing off-heap direct-memory data stores, and/or systems including off-heap direct-memory data store
US7395285B2 (en) Garbage collection system
KR20100132244A (en) Memory system and method of managing memory system
GB2466264A (en) Memory defragmentation and compaction into high priority memory banks
Venkatesan et al. Ex-tmem: Extending transcendent memory with non-volatile memory for virtual machines
US20120239884A1 (en) Memory control device, memory device, memory control method, and program
WO2017028909A1 (en) Shared physical registers and mapping table for architectural registers of multiple threads
JP5334048B2 (en) Memory device and computer
Lee et al. WALTZ: Leveraging zone append to tighten the tail latency of LSM tree on ZNS SSD
US7085888B2 (en) Increasing memory locality of filesystem synchronization operations
JP5505195B2 (en) Memory control device and control method
CN117093508B (en) Memory resource management method and device, electronic equipment and storage medium
Xiao et al. Tnvmalloc: A thread-level-based wear-aware allocator for nonvolatile main memory
Xu et al. I/O Transit Caching for PMem-based Block Device
CN107066624B (en) Data off-line storage method

Legal Events

Date Code Title Description
COOA Change in applicant's name or ownership of the application

Owner name: NOKIA CORPORATION

Free format text: FORMER OWNER: SYMBIAN SOFTWARE LTD

WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)