WO2005069148A2 - Memory management method and related system - Google Patents

Memory management method and related system

Info

Publication number
WO2005069148A2
WO2005069148A2
Authority
WO
WIPO (PCT)
Prior art keywords
memory
devices
data
frequently accessed
accessed data
Prior art date
Application number
PCT/IB2005/050123
Other languages
French (fr)
Other versions
WO2005069148A3 (en)
Inventor
Richard M. Miller-Smith
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V. filed Critical Koninklijke Philips Electronics N.V.
Publication of WO2005069148A2 publication Critical patent/WO2005069148A2/en
Publication of WO2005069148A3 publication Critical patent/WO2005069148A3/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/0215 Addressing or allocation; Relocation with look ahead addressing means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/10 Address translation
    • G06F 12/1027 Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/12 Replacement control
    • G06F 12/121 Replacement control using replacement algorithms
    • G06F 12/122 Replacement control using replacement algorithms of the least frequently used [LFU] type, e.g. with individual count value
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/10 Providing a specific technical effect
    • G06F 2212/1028 Power efficiency
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The present invention provides for a memory management method and related system for use with a memory arrangement comprising a plurality of memory devices, such as SDRAM chips, which define a plurality of memory locations for the storage of data to be accessed, the method including the step of moving more frequently accessed data to common devices of the said plurality of memory devices so as to reduce the number of the said plurality of devices in which the more frequently accessed data is stored, thereby allowing for a power-saving mode to be initiated at memory devices within the said plurality of memory devices not including the said more frequently accessed data. Through such re-mapping of active memory pages into a reduced number of devices, as determined by the frequency with which the data is required, memory devices that are then devoid of such active pages can be placed into, for example, a self-refresh or powered-down mode.

Description

DESCRIPTION
MEMORY MANAGEMENT METHOD AND RELATED SYSTEM

The present invention relates to a memory management arrangement and related method for use in particular in assisting in the reduction of power consumption within portable electronic devices.
Currently available portable devices such as Personal Digital Assistants (PDAs), digital cameras and solid-state music players generally require a large amount of electronic storage. Such requirements are commonly met by the use of FLASH memory, a hard disk or Random Access Memory (RAM). RAM proves particularly advantageous in view of its increasing cost effectiveness and fast access times for both read and write access.

In known devices employing RAM, the majority of the memory is provided in the form of SDRAM requiring a refresh process whereby the charge effectively stored as each bit within the SDRAM is reset to the required level for its state. Commonly, an SDRAM device has four different states, namely: on and in use, for example in which the CPU is sending or receiving commands or data; on, but not in use; in standby mode, i.e. self-refresh mode; or powered down, in which mode data is no longer stored in the chip. The standard states employed when the device has its CPU running are either on and in use, or on but not in use. The standby, or self-refresh, mode is provided so as to maintain data within the SDRAM when no accesses are required and in a manner exhibiting minimum power consumption. In particular, the CPU can switch the memory from one of its "on" modes into such a standby mode when it is known that no accesses will be required thereto. In order to expand possible memory space, a plurality of SDRAM devices can be used and the number employed is commonly dictated by the width of the CPU bus.

Known computer systems can contain a hard disk, or other large storage array, on which data can be stored. If the CPU has a memory management unit and is running an operating system with the relevant software, these systems can implement virtual memory. This is where blocks of data, called pages, are stored to disk once empty space in the system's RAM becomes scarce. Furthermore, in addition to switching the SDRAM devices into a self-refresh standby mode when power saving is required, known memory management systems also exhibit a suspend, or hibernate, mode within which the content of the memory, and data relating to the state of the CPU, is written out to a hard disk for temporary storage therein. Once written to the hard disk, the CPU and SDRAM devices can then be switched off so as to reduce power consumption, although these elements are of course initialised with the data saved on the hard disk at a later stage when operation is again required.

Such known systems therefore generally employ a memory which is tracked by the operating system of the CPU. The CPU employs a memory management unit to map the usage of small segments of memory and attribute these to the processes running on the CPU. The use of a hard disk in the manner noted above leads to the provision of a virtual memory for the CPU. However, while such procedures can serve to reduce the power consumption required within the associated device, such reduction in power consumption is achieved merely on the basis of the deactivation of portions of the device, i.e. when such portions are in a standby or power-down mode. It would therefore also be advantageous to provide for reduced power consumption when SDRAM devices are required to be active.
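By way of illustration only, and not as part of the original disclosure, the four device states described above can be expressed as a small state model in C. The state names, the relative power figures and the helper function below are assumptions chosen purely to make the trade-off between data retention and power consumption explicit.

    #include <stdio.h>

    /* Illustrative SDRAM power states, mirroring the four modes described
     * above: active (on and in use), idle (on but not in use), self-refresh
     * (standby, data retained) and powered down (data lost). */
    enum sdram_state {
        SDRAM_ACTIVE,       /* CPU is sending or receiving commands or data */
        SDRAM_IDLE,         /* powered up but no access in progress          */
        SDRAM_SELF_REFRESH, /* standby: data retained at minimum power       */
        SDRAM_OFF           /* powered down: contents are lost               */
    };

    /* Hypothetical relative power figures (milliwatts), for illustration only. */
    static const int sdram_power_mw[] = {
        [SDRAM_ACTIVE]       = 300,
        [SDRAM_IDLE]         = 120,
        [SDRAM_SELF_REFRESH] = 5,
        [SDRAM_OFF]          = 0,
    };

    /* Data survives every state except a full power-down. */
    static int sdram_retains_data(enum sdram_state s)
    {
        return s != SDRAM_OFF;
    }

    int main(void)
    {
        enum sdram_state s = SDRAM_SELF_REFRESH;
        printf("self-refresh: ~%d mW, data retained: %s\n",
               sdram_power_mw[s], sdram_retains_data(s) ? "yes" : "no");
        return 0;
    }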
US-A-5860106 discloses a memory management arrangement in which memory access activity is monitored in an attempt to predict likely future activity, and components of the memory sub-system are thereby dynamically enabled and disabled on the basis of such predicted future requirements in order to achieve power-saving. However, this arrangement attempts to achieve a reduction in power consumption on an access-by-access basis, by altering the behaviour of the memory controller after each access, and so the degree of available power-saving is disadvantageously restricted.
The present invention seeks to provide for a memory management method and related apparatus having advantages over known such methods and apparatus.
According to a first aspect of the present invention there is provided a memory management method for use with a memory arrangement comprising a plurality of memory devices which provide for a plurality of memory locations for the storage of data to be accessed, the method including the step of identifying and moving more frequently accessed data to common devices of the said plurality of memory devices so as to reduce the number of the said plurality of devices in which the said more frequently accessed data is stored, thereby allowing for a power-saving mode to be initiated at memory devices within the said plurality of memory devices not including the said more frequently accessed data.

The invention is particularly advantageous in reducing the power required by a plurality of SDRAM devices while the said plurality can still offer the same functionality and performance to the CPU.

Preferably the method includes the step of remapping the locations containing the more frequently accessed data to the said common storage devices. Further, the method may also include the step of developing a list of contiguous memory regions within which the more frequently accessed data is located. Preferably, the size of the contiguous memory region is identical to the page size used by the CPU and operating system. This allows the CPU to maintain an "active page list".

In particular, a CPU associated with the arrangement is extended in order to provide definite details of the location of memory accesses. Advantageously, the CPU extension occurs in the CPU's memory management unit, preferably the translation look-aside buffer. A counter is preferably incremented each time an access through a translation look-aside buffer entry is made. Preferably, all translation look-aside buffer entries are read in order to read, and reset, the counters associated with each of the translation look-aside buffer entries. As a further feature, the counter values can be stored back into a table stored in memory at the instant at which a memory location is unloaded from the TLB. The step of parsing these written-back values, by the CPU, to create the active page list is included. As an alternative, the method can include the step of periodically reading CPU cache tags associated with the stored data in order to identify the said more frequently accessed data.

In a virtual memory system, the method of swapping non-frequently accessed data to a hard disk can be extended. The SDRAM in self-refresh mode can be seen as an intermediary temporary storage device which has relatively fast access but uses very little power. The stored data can then be swapped in and out of the hard disk either directly or via the intermediary temporary storage device, depending upon the access history of the memory pages and the current access requirement of such pages.

In controlling the number of the plurality of memory devices over which the more frequently accessed data is stored, the invention advantageously dynamically alters the amount of, for example, SDRAM immediately accessible at any particular time. Power-savings can therefore be achieved at a lower level than is known in the prior art by placing the SDRAM devices into a low-power mode or switching them off should they be devoid of any of the more frequently accessed data. The memory control of the present invention can therefore advantageously behave consistently, irrespective of the number of SDRAM devices that remain active at any particular time. The grouping of the stored data is therefore advantageously determined on the basis of the frequency with which the data is accessed.

According to another aspect of the present invention there is provided a memory management system for use with a memory arrangement comprising a plurality of memory devices which provide for a plurality of memory locations for the storage of data to be accessed, the system including means for identifying more frequently accessed data stored over the said plurality of memory devices and for moving the said more frequently accessed data to common devices within the said plurality of memory devices so as to decrease the number of the said plurality of devices in which more frequently accessed data is stored, and further including means for placing the plurality of memory devices that are then without the said more frequently accessed data into a power-saving mode.

The system can advantageously include features arranged to provide for the further steps noted above. It will therefore be appreciated that the present invention can provide for a processor that allows for its connection to the SDRAM devices to be controlled such that different SDRAM devices within the memory arrangement can be placed into appropriate different power states dependent upon the activity level of the memory pages stored therein. Power reduction can therefore advantageously be achieved through reduction of the number of SDRAM devices that are required to be fully powered-up. In particular, all unused memory devices can be placed in an off state, while the most frequently required pages are grouped together and located in the fast powered-up memory devices. The operating system of the CPU is advantageously therefore arranged to track the particular usage of memory pages and provide for the appropriate re-mapping of the memory pages as and when required.

The invention is described further hereinafter, by way of example only, with reference to the accompanying drawings in which:
Fig. 1 is a schematic block diagram illustrating a plurality of memory devices controlled in accordance with an embodiment of the present invention; and
Fig. 2 is a schematic block diagram illustrating the operation of an embodiment of the present invention also employing virtual memory space in the form of a hard disk.

As will be appreciated, the present invention follows from the realisation that, in order to assist in the minimisation of the power requirements within the device, the CPU should advantageously minimise the power requirements of the SDRAM devices during operation. At any particular point in time, the CPU is very unlikely to require access to all locations of all pages of the available memory: there will be a relatively small number of active processes and tasks being run, and the remaining memory pages not involved in such processes and tasks will generally be dormant awaiting their next activation. On this basis, it is identified that only a particular set of the active pages will carry the majority of the accesses required by the CPU to the memory over a particular period of time. The present invention seeks to effectively remap the active pages so that they are physically located within common devices so as to require the smallest possible space and therefore the smallest number of memory devices for the required operation.
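The remapping step described above can be sketched as a simple packing exercise: sort the pages by how often they are accessed and fill the devices in order, so that the active pages end up on as few devices as possible and every other device becomes a candidate for self-refresh or power-down. The C sketch below is an editor's illustration under those assumptions; the constants, the fabricated access counts and the equal-sized devices are hypothetical and not taken from the patent.

    #include <stdio.h>
    #include <stdlib.h>

    #define NUM_DEVICES   4
    #define PAGES_PER_DEV 8
    #define NUM_PAGES     (NUM_DEVICES * PAGES_PER_DEV)
    #define ACTIVE_PAGES  10   /* hypothetical size of the active page list */

    struct page {
        int      id;
        unsigned access_count;   /* gathered e.g. from TLB counters */
    };

    static int by_count_desc(const void *a, const void *b)
    {
        const struct page *pa = a, *pb = b;
        return (int)pb->access_count - (int)pa->access_count;
    }

    int main(void)
    {
        struct page pages[NUM_PAGES];
        int active_in_dev[NUM_DEVICES] = {0};

        /* Fabricated access counts purely for demonstration. */
        for (int i = 0; i < NUM_PAGES; i++) {
            pages[i].id = i;
            pages[i].access_count = (unsigned)((i * 37) % 100);
        }

        /* Sort pages so the most frequently accessed come first... */
        qsort(pages, NUM_PAGES, sizeof pages[0], by_count_desc);

        /* ...then pack them device by device, so the active pages occupy
         * the smallest possible number of devices; this ordering is the
         * new physical mapping. */
        for (int i = 0; i < NUM_PAGES; i++) {
            int dev = i / PAGES_PER_DEV;
            if (i < ACTIVE_PAGES) {
                active_in_dev[dev]++;
                printf("active page %2d (count %2u) -> device %d\n",
                       pages[i].id, pages[i].access_count, dev);
            }
        }

        for (int d = 0; d < NUM_DEVICES; d++)
            printf("device %d: %d active pages -> %s\n", d, active_in_dev[d],
                   active_in_dev[d] ? "fully powered" : "self-refresh / off");
        return 0;
    }

In a real system the access counts would come from the tracking mechanisms described in the text (TLB counters or cache tags), and the resulting mapping would be written into the operating system's memory management tables.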
Once the active memory pages have been remapped in this manner, certain of the plurality of memory devices, for example SDRAM chips, will not contain any such active pages and so, not requiring such, or indeed any, access by the CPU, those devices can be put into a self-refresh or even powered-down mode so as to reduce the power consumption of the memory arrangement. Since no active pages are now mapped to these devices, their powering-down will not have an adverse effect on the functionality nor the speed of the memory arrangement.

Turning now to Fig. 1, the above-mentioned concept of the present invention is illustrated further with regard to a memory arrangement 10 comprising four memory devices in the form of SDRAM chips 12, 14, 16 and 18. In accordance with the management arrangement of the present invention, the memory pages have been remapped such that the more frequently accessed pages, referred to herein as the active pages, are remapped to, as far as possible, common memory devices so as to limit the total number of devices over which the active pages are spread. For example, active pages have been removed from the SDRAM chips 14, 16 and 18 to the SDRAM chip 12, which is almost completely full of active pages. On this basis, the SDRAM chip 12 is fully powered up and in operation.

Since all active pages have been physically mapped into the SDRAM chip 12, no active pages therefore remain within the SDRAM chips 14, 16 and 18. An appropriate form of power-saving mode can therefore be applied to these three devices 14, 16 and 18. In the illustrated example of Fig. 1, the SDRAM chip 14 is full of used pages but, since these are not active, the device 14 can be placed in a self-refresh mode. Likewise, the SDRAM chip 16 is almost full with used pages but again, such used pages are not active, and the device 16 can likewise be placed in a self-refresh mode. As regards the SDRAM chip 18, no pages whatsoever are located therein and so this memory device can be powered down completely.

Thus, through the memory management arrangement embodying the present invention, the previous functionality derived from all active pages can be achieved, through the above-mentioned remapping, by having only one, 12, of the four memory devices 12-18 in a fully powered-up state. Through such dynamic memory management, therefore, it is possible to further reduce power consumption without any detrimental effect on the performance of the memory arrangement. As will be evident from the above, sufficient memory is still available for access, albeit now located on a common memory device 12.

At any time a process may start, stop or become active, and this may alter the list of frequently accessed data; the active page list should change to reflect this. For example, if a page in a powered-down chip is required, that chip is then powered up for access. A chip is only powered down after a set time has elapsed with no accesses to it having been made.

The CPU runs a background task, and it is ideally the responsibility of this task to shift the data contained in the active pages such that they are packed into the smallest possible space. This requires data to be swapped between a page that has left the active page list and one that has been added. Alternatively, the active page list has to increase in size. The task updates the operating system's memory management maps and tables to reflect the change.

A particular requirement of the invention is to provide an accurate list of currently active pages. The method adopted to achieve this is dependent on the capabilities of the CPU used in the system. In general the operating system will be extended such that it tracks the usage of the pages over a period of time so as to build a sorted list of the pages used in the system.

In one arrangement the CPU can be extended in order to give definite details of page access. Such an extension would most likely occur in the Translation Look-aside Buffer (TLB). This could be extended such that each time an access through a particular entry is taken, a counter associated with that entry is incremented. The manner in which the count can be read would generally depend on the CPU in question. Some CPUs need the operating system to load and unload TLB entries. In this case the counter can advantageously be read when an entry is unloaded and the page list updated accordingly. The operating system then also regularly probes all TLB entries to read, and reset, the counters for the other TLB entries. This can prove important since a number of pages will be so popular that they will not leave the TLB.
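The TLB extension just described amounts to a per-entry access counter that the operating system samples, both when an entry is unloaded and by periodically probing and resetting all entries so that pages which never leave the TLB are still counted. Commodity TLBs do not generally expose such counters, so the structures and functions in the following C sketch are hypothetical stand-ins for the proposed hardware extension rather than any real CPU interface.

    #include <stdio.h>
    #include <string.h>

    #define TLB_ENTRIES 8
    #define NUM_PAGES   64

    /* Hypothetical model of the proposed TLB extension: each entry carries
     * a hardware counter that is incremented on every access made through it. */
    struct tlb_entry {
        int      page;          /* page mapped by this entry, -1 if unused */
        unsigned access_count;
    };

    static struct tlb_entry tlb[TLB_ENTRIES];
    static unsigned long    page_hits[NUM_PAGES];  /* accumulated statistics */

    /* Periodic probe by the operating system: read and reset every counter,
     * folding the counts into a per-page table.  Popular pages that never
     * leave the TLB are still accounted for in this way. */
    static void sample_tlb_counters(void)
    {
        for (int i = 0; i < TLB_ENTRIES; i++) {
            if (tlb[i].page >= 0) {
                page_hits[tlb[i].page] += tlb[i].access_count;
                tlb[i].access_count = 0;
            }
        }
    }

    /* Called when an entry is unloaded from the TLB: its counter is written
     * back before the entry is reused for another page. */
    static void unload_tlb_entry(int index, int new_page)
    {
        if (tlb[index].page >= 0)
            page_hits[tlb[index].page] += tlb[index].access_count;
        tlb[index].page = new_page;
        tlb[index].access_count = 0;
    }

    int main(void)
    {
        memset(page_hits, 0, sizeof page_hits);
        for (int i = 0; i < TLB_ENTRIES; i++)
            tlb[i] = (struct tlb_entry){ .page = -1, .access_count = 0 };

        /* Simulate a few accesses through entry 0 mapping page 5. */
        tlb[0].page = 5;
        tlb[0].access_count = 42;
        sample_tlb_counters();
        unload_tlb_entry(0, 9);

        printf("page 5 accumulated %lu accesses\n", page_hits[5]);
        return 0;
    }

The accumulated per-page table is what the background task would sort in order to produce the active page list described above.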
As one alternative, the CPU is arranged with the capability to read the TLB entries. In this case there is no hardware counter required in the TLB; however, entries in the TLB are permitted to be read. In this alternative arrangement, the operating system would regularly read the TLB entries. This gives an indication of which pages are currently being accessed and, over a period of time, this will also give a good statistical list of the most active pages.

In another alternative, the CPU is arranged with the capability to read the cache tags. This involves a similar process to that outlined above. The main requirement here is that the current contents of the processor's caches are considered to reflect the most important data at any one point. The processor can read the cache tags, calculating the page from which the data originates, and from this information a list of the currently active pages is then built.

In yet a further arrangement, the virtual memory of the system can be extended. While a virtual memory can store pages in a "swap space" on a hard disk, instead of swapping directly to and from the hard disk, an intermediary store is used. This intermediary storage device advantageously comprises RAM in self-refresh mode. When required, the memory pages are swapped in and out of this intermediary RAM, which offers very fast access. Each time access is required to this memory the RAM is powered up, but it can then be returned to self-refresh once the access has been completed. If such a hard disk is present, each page in the RAM swap space is tracked to see when it was last swapped into the active pages. Once the RAM swap space is full, pages that are very infrequently required can then be swapped out of the RAM swap space into the swap space on the hard disk. If such a page is required subsequently, it can be loaded from the hard disk into the active pages, whilst a page from the active pages is put into the RAM swap space and a page from the RAM swap space is in turn put onto the hard disk.

This is described further with reference to Fig. 2, which illustrates a memory arrangement 20 comprising a set of SDRAM chips 22 containing active pages, a hard disk 24 and intermediary SDRAM chips forming an intermediary swap space 26. As noted, when the active pages 22 become full, pages begin to be placed in the RAM swap space. When both the active pages 22 and the RAM swap space 26 are full, pages can then begin to be placed on the hard disk 24. When a page is required and it is found to be in the RAM swap space 26, the operating system is arranged to select a target page and the data can then be swapped. When a page is required and it is found to be on the hard disk 24, this data is loaded into the active pages 22 and the target page is then placed into the RAM swap space 26, thereby replacing a page which has been written into the hard disk page store.

Such arrangements, and in particular the latter feature, allow the system to dynamically alter its power usage. This can be done by altering the size of the active page list. If, of course, the system is powered by a mains connection, all RAM can be switched on and used at full speed. When not connected to a mains supply, the system can advantageously adapt itself so that it uses less and less power as the battery drains.
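The three-tier behaviour around Fig. 2 can be read as a page-fault policy: a page found in the RAM swap space is exchanged with a target page from the active pages, while a page found on the hard disk displaces a target page into the RAM swap space, which in turn evicts one of its own pages to disk. The following C sketch is an editor's in-memory simulation of that policy; the tier names, the trivial victim selection and the page counts are assumptions rather than the patented implementation, which would also use the access history of the pages when choosing targets.

    #include <stdio.h>

    enum tier { ACTIVE_RAM, RAM_SWAP, HARD_DISK };

    #define NUM_PAGES 12
    static enum tier location[NUM_PAGES];

    /* Hypothetical victim selection: pick any page currently in the given
     * tier (a real system would use access history, e.g. least recently
     * swapped in). */
    static int pick_victim(enum tier t)
    {
        for (int p = 0; p < NUM_PAGES; p++)
            if (location[p] == t)
                return p;
        return -1;
    }

    /* Bring 'page' into the active pages, following the policy described
     * above for the two slower tiers. */
    static void access_page(int page)
    {
        if (location[page] == ACTIVE_RAM)
            return;                           /* already immediately accessible */

        int victim = pick_victim(ACTIVE_RAM); /* active page to displace */

        if (location[page] == RAM_SWAP) {
            /* Simple exchange with the self-refresh RAM swap space. */
            if (victim >= 0) location[victim] = RAM_SWAP;
        } else {
            /* Page is on the hard disk: the displaced active page moves to
             * the RAM swap space, which in turn evicts one page to disk. */
            int evicted = pick_victim(RAM_SWAP);
            if (evicted >= 0) location[evicted] = HARD_DISK;
            if (victim  >= 0) location[victim]  = RAM_SWAP;
        }
        location[page] = ACTIVE_RAM;
    }

    int main(void)
    {
        for (int p = 0; p < NUM_PAGES; p++)   /* fabricated starting layout */
            location[p] = (p < 4) ? ACTIVE_RAM : (p < 8) ? RAM_SWAP : HARD_DISK;

        access_page(10);                      /* page currently on disk */
        printf("page 10 now in tier %d (0 = active RAM)\n", location[10]);
        return 0;
    }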

Claims

1. A memory management method for use with a memory arrangement comprising a plurality of memory devices which provide for a plurality of memory locations for the storage of data to be accessed, the method including the step of identifying and moving more frequently accessed data to common devices of the said plurality of memory devices so as to reduce the number of the said plurality of devices in which the said more frequently accessed data is stored, thereby allowing for a power-saving mode to be initiated at memory devices within the said plurality of memory devices not including the said more frequently accessed data.
2. A method as claimed in Claim 1, and including the step of remapping the locations containing the more frequently accessed data to the said common storage devices.
3. A method as claimed in Claim 1 or 2, wherein the power-saving mode comprises either self refresh mode or a powered-down mode.
4. A method as claimed in Claim 1, 2 or 3, wherein the non-frequently accessed data is arranged to be stored on a hard disk storage device.
5. A method as claimed in any one or more of the preceding claims, in which a memory device to be placed into a power-saving mode is only placed in such mode after a predetermined time period has elapsed without any accesses being attempted to the device.
6. A method as claimed in any one or more of the preceding claims, including the steps of developing a list of current memory locations within which the more frequently accessed data is located.
7. A method as claimed in Claim 6, in which a CPU associated with the arrangement is extended in order to provide definite details of memory location access.
8. A method as claimed in Claim 7, in which the CPU extension occurs in the translation look-aside buffer.
9. A method as claimed in Claim 8, and including the step of incrementing a counter each time an access through the translation look-aside buffer entry is made.
10. A method as claimed in Claim 9, in which all translation look-aside buffer entries are read in order to read, and reset, counters associated with each of the translation look-aside buffer entries.
11. A method as claimed in Claim 9 or 10, and including the step of storing counter values back into the memory at the point at which a memory location is unloaded from the TLB, and including the step of parsing the written-back values in order to create the active page list.
12. A method as claimed in Claim 6, and including the step of reading the translation look-aside buffer entries within a CPU associated with the arrangement.
13. A method as claimed in Claim 12, wherein the operating system is arranged to read the translation look-aside buffer entries.
14. A method as claimed in Claim 6, and including the step of reading cache tags associated with the stored data and in order to identify the more frequently accessed data.
15. A method as claimed in Claim 6, and including the step of swapping non-frequently accessed data to a hard disk.
16. A method as claimed in Claim 15, and including the step of swapping the non-frequently used data to a hard disk by means of an intermediary temporary storage device.
17. A method as claimed in Claim 16, wherein stored data is swapped in and out of the hard disk either directly or via the intermediary temporary storage device depending upon the access history of the memory pages and the current access requirement of such pages.
18. A method as claimed in Claim 17, and including the step of retaining the intermediary storage device in a self-refresh mode.
19. A memory management system for use with a memory arrangement comprising a plurality of memory devices which provide for a plurality of memory locations for the storage of data to be accessed, the system including means for identifying more frequently accessed data stored over the said plurality of memory devices and for moving the said more frequently accessed data to common devices within the said plurality of memory devices so as to decrease the number of the said plurality of the devices in which more frequently accessed data is stored, and further including means for placing the plurality of memory devices that are then without the said more frequently accessed data into a power-saving mode.
20. A system as claimed in Claim 19, and arranged for remapping the locations containing the more frequently accessed data to the said common storage devices.
21. A system as claimed in Claim 19 or 20, wherein the power-saving mode comprises either self refresh mode or a powered-down mode.
22. A system as claimed in Claim 19, 20 or 21, and arranged to develop a list of current memory locations within which the more frequently accessed data is located.
23. A system as claimed in Claim 22, in which a CPU associated with the arrangement is extended in order to provide definite details of memory location access.
24. A system as claimed in Claim 23, wherein the CPU is arranged such that the said extension is provided in the translation look-aside buffer.
25. A system as claimed in Claim 24 and including a counter arranged to be incremented each time an access through the translation look-aside buffer is made.
26. A system as claimed in Claim 25, in which all translation look-aside buffer entries are arranged to be read in order to read, and reset, counters associated with other translation look-aside buffer entries.
27. A system as claimed in Claim 24 or 25, and including means for storing counter values back into the memory at the point at which a memory location is unloaded, and including means for parsing the written-back values in order to create the active page list.
28. A system as claimed in Claim 22, and including means for reading cache tags associated with the stored data and in order to identify the more frequently accessed data.
29. A system as claimed in any one or more of Claims 19 to 28, and including a hard disk arranged for receiving non-frequently accessed data.
30. A system as claimed in Claim 29, and including an intermediary temporary storage device.
31. A system as claimed in Claim 30, wherein the said intermediary storage device is arranged to operate in a self-refresh mode.
PCT/IB2005/050123 2004-01-13 2005-01-11 Memory management method and related system WO2005069148A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB0400661.5 2004-01-13
GBGB0400661.5A GB0400661D0 (en) 2004-01-13 2004-01-13 Memory management method and related system

Publications (2)

Publication Number Publication Date
WO2005069148A2 true WO2005069148A2 (en) 2005-07-28
WO2005069148A3 WO2005069148A3 (en) 2006-02-23

Family

ID=31503818

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2005/050123 WO2005069148A2 (en) 2004-01-13 2005-01-11 Memory management method and related system

Country Status (2)

Country Link
GB (1) GB0400661D0 (en)
WO (1) WO2005069148A2 (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5630097A (en) * 1991-06-17 1997-05-13 Digital Equipment Corporation Enhanced cache operation with remapping of pages for optimizing data relocation from addresses causing cache misses
US20030023825A1 (en) * 2001-07-30 2003-01-30 Woo Steven C Consolidation of allocated memory to reduce power consumption
US20030051104A1 (en) * 2001-09-07 2003-03-13 Erik Riedel Technique for migrating data between storage devices for reduced power consumption
JP2003108317A (en) * 2001-09-27 2003-04-11 Fujitsu Ltd Storage system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
PATENT ABSTRACTS OF JAPAN vol. 2003, no. 08, 6 August 2003 (2003-08-06) & JP 2003 108317 A (FUJITSU LTD), 11 April 2003 (2003-04-11) *
V. DE LA LUZ, M. KANDEMIR AND I. KOLCU: "Automatic data migration for reducing energy consumption in multi-bank memory systems" PROCEEDINGS OF 39TH DESIGN AUTOMATION CONFERENCE 10-14 JUNE 2002 NEW ORLEANS, LA, USA, June 2002 (2002-06), pages 213-218, XP002340853 Proceedings 2002 Design Automation Conference (IEEE Cat. No.02CH37324) ACM New York, NY, USA ISBN: 1-58113-461-4 *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
USRE46193E1 (en) 2005-05-16 2016-11-01 Texas Instruments Incorporated Distributed power control for controlling power consumption based on detected activity of logic blocks
WO2006123140A1 (en) * 2005-05-18 2006-11-23 Symbian Software Limited Memory management in a computing device
WO2007072435A2 (en) * 2005-12-21 2007-06-28 Nxp B.V. Reducing the number of memory banks being powered
WO2007072435A3 (en) * 2005-12-21 2007-11-01 Nxp Bv Reducing the number of memory banks being powered
US8225036B2 (en) 2007-07-24 2012-07-17 Hitachi, Ltd. Storage controller and method for controlling the same
EP2026186A3 (en) * 2007-07-24 2011-12-07 Hitachi, Ltd. Storage controller and method for controlling the same
US8166326B2 (en) * 2007-11-08 2012-04-24 International Business Machines Corporation Managing power consumption in a computer
US8041521B2 (en) 2007-11-28 2011-10-18 International Business Machines Corporation Estimating power consumption of computing components configured in a computing system
US8103884B2 (en) 2008-06-25 2012-01-24 International Business Machines Corporation Managing power consumption of a computer
US8078695B2 (en) 2008-07-16 2011-12-13 Sony Corporation Media on demand using an intermediary device to output media from a remote computing device
US8041976B2 (en) 2008-10-01 2011-10-18 International Business Machines Corporation Power management for clusters of computers
US8514215B2 (en) 2008-11-12 2013-08-20 International Business Machines Corporation Dynamically managing power consumption of a computer with graphics adapter configurations
GB2466264A (en) * 2008-12-17 2010-06-23 Symbian Software Ltd Memory defragmentation and compaction into high priority memory banks
GB2497835A (en) * 2011-11-14 2013-06-26 Ibm Using hot and cold memory tiers to increase the memory capacity in power-constrained systems
GB2497835B (en) * 2011-11-14 2014-01-01 Ibm Increasing memory capacity in power-constrained systems
US8719527B2 (en) 2011-11-14 2014-05-06 International Business Machines Corporation Increasing memory capacity in power-constrained systems
US8738875B2 (en) 2011-11-14 2014-05-27 International Business Machines Corporation Increasing memory capacity in power-constrained systems

Also Published As

Publication number Publication date
GB0400661D0 (en) 2004-02-11
WO2005069148A3 (en) 2006-02-23

Similar Documents

Publication Publication Date Title
WO2005069148A2 (en) Memory management method and related system
KR100998389B1 (en) Dynamic memory sizing for power reduction
US8010764B2 (en) Method and system for decreasing power consumption in memory arrays having usage-driven power management
US9201608B2 (en) Memory controller mapping on-the-fly
US6038673A (en) Computer system with power management scheme for DRAM devices
US20180348851A1 (en) Report updated threshold level based on parameter
US7752470B2 (en) Method and system for power management including device controller-based device use evaluation and power-state control
US20080313482A1 (en) Power Partitioning Memory Banks
TWI321726B (en) Method of dynamically controlling cache size
US7821864B2 (en) Power management of memory via wake/sleep cycles
US7454639B2 (en) Various apparatuses and methods for reduced power states in system memory
US5632038A (en) Secondary cache system for portable computer
US20060181949A1 (en) Operating system-independent memory power management
US8788777B2 (en) Memory on-demand, managing power in memory
CN105630405B (en) A kind of storage system and the reading/writing method using the storage system
US11320890B2 (en) Power-conserving cache memory usage
GB2426360A (en) Reorganisation of memory for conserving power in a computing device
JPH04230508A (en) Apparatus and method for controlling electric power with page arrangment control
US20070006000A1 (en) Using fine-grained power management of physical system memory to improve system sleep
TWI224728B (en) Method and related apparatus for maintaining stored data of a dynamic random access memory
US8484418B2 (en) Methods and apparatuses for idle-prioritized memory ranks
JP3541349B2 (en) Cache memory backup system
US20140156941A1 (en) Tracking Non-Native Content in Caches

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 EP: the EPO has been informed by WIPO that EP was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

WWW WIPO information: withdrawn in national office

Country of ref document: DE

122 EP: PCT application non-entry in European phase