DESCRIPTION
MEMORY MANAGEMENT METHOD AND RELATED SYSTEM

The present invention relates to a memory management arrangement and related method for use, in particular, in assisting in the reduction of power consumption within portable electronic devices.
Currently available portable devices such as Personal Digital Assistants (PDAs), digital cameras and solid-state music players generally require a large amount of electronic storage. Such requirements are commonly met by the use of FLASH memory, a hard disk or Random Access Memory (RAM). RAM proves particularly advantageous in view of its increasing cost effectiveness and fast access times for both read and write access.

In known devices employing RAM, the majority of the memory is provided in the form of SDRAM, which requires a refresh process whereby the charge effectively stored as each bit within the SDRAM is reset to the level required for its state. Commonly, an SDRAM device has four different states, namely: on and in use, for example when the CPU is sending or receiving commands or data; on but not in use; in standby mode, i.e. self-refresh mode; or powered down, in which mode data is no longer stored in the chip. The standard states employed when the device has its CPU running are either on and in use, or on but not in use. The standby, or self-refresh, mode is provided so as to maintain data within the SDRAM when no accesses are required, and in a manner exhibiting minimum power consumption. In particular, the CPU can switch the memory from one of its "on" modes into such a standby mode when it is known that no accesses will be required.
In order to expand the possible memory space, a plurality of SDRAM devices can be used, and the number employed is commonly dictated by the width of the CPU bus. Known computer systems can contain a hard disk, or other large storage array, on which data can be stored. If the CPU has a memory management unit and is running an operating system with the relevant software, these systems can implement virtual memory, whereby blocks of data, called pages, are stored to disk once empty space in the system's RAM becomes scarce.

Furthermore, in addition to switching the SDRAM devices into a self-refresh standby mode when power saving is required, known memory management systems also exhibit a suspend, or hibernate, mode in which the content of the memory, and data relating to the state of the CPU, is written out to a hard disk for temporary storage therein. Once written to the hard disk, the CPU and SDRAM devices can then be switched off so as to reduce power consumption, although these elements are of course initialised with the data saved on the hard disk at a later stage when operation is again required.

Such known systems therefore generally employ a memory which is tracked by the operating system of the CPU. The CPU employs a memory management unit to map the usage of small segments of memory and attribute these to the processes running on the CPU. The use of a hard disk in the manner noted above leads to the provision of a virtual memory for the CPU. However, while such procedures can serve to reduce the power consumption required within the associated device, such reduction in power consumption is achieved merely on the basis of the deactivation of portions of the device, i.e. when such portions are in a standby or power-down mode. It would therefore also be advantageous to provide for reduced power consumption when SDRAM devices are required to be active.
US-A-5860106 discloses a memory management arrangement in which memory access activity is monitored in an attempt to predict likely future activity, so as to dynamically enable and disable components of the memory sub-system on the basis of such predicted future requirements and thereby achieve power-saving. However, since this arrangement attempts to achieve a reduction in power consumption on an access-by-access basis, by altering the behaviour of the memory controller after each access, the degree of available power-saving is disadvantageously restricted.
The present invention seeks to provide for a memory management method and related apparatus having advantages over such known methods and apparatus.
According to a first aspect of the present invention there is provided a memory management method for use with a memory arrangement comprising a plurality of memory devices which provide for a plurality of memory locations for the storage of data to be accessed, the method including the step of identifying and moving more frequently accessed data to common devices of the said plurality of memory devices so as to reduce the number of the said plurality of devices in which the said more frequently accessed data is stored, thereby allowing for a power-saving mode to be initiated at memory devices within the said plurality of memory devices not including the said more frequently accessed data.

The invention is particularly advantageous in reducing the power required by a plurality of SDRAM devices while the said plurality can still offer the same functionality and performance to the CPU.

Preferably, the method includes the step of remapping the locations containing the more frequently accessed data to the said common storage devices. Further, the method may also include the step of developing a list of contiguous memory regions within which the more frequently accessed data is located. Preferably, the size of each contiguous memory region is identical to the page size used by the CPU and operating system. This allows the CPU to maintain an "active page list".
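By way of illustration only, the grouping step described above may be sketched as follows; the device capacity, page identifiers and access counts used here are assumed values for the purpose of illustration and form no part of the claimed method:

```python
# Illustrative sketch only: pack the most frequently accessed pages
# into as few memory devices as possible, so that devices left
# without such pages can enter a power-saving mode.
# PAGES_PER_DEVICE and the sample counts are assumed values.

PAGES_PER_DEVICE = 4  # assumed capacity of each SDRAM device, in pages

def pack_active_pages(access_counts, num_devices):
    """Map each page to a device index, most-accessed pages first."""
    ordered = sorted(access_counts, key=access_counts.get, reverse=True)
    mapping = {page: slot // PAGES_PER_DEVICE
               for slot, page in enumerate(ordered)}
    used = set(mapping.values())
    idle = [d for d in range(num_devices) if d not in used]
    return mapping, idle  # idle devices can be placed in a low-power mode

counts = {"p0": 90, "p1": 75, "p2": 60, "p3": 50, "p4": 2}
mapping, idle = pack_active_pages(counts, num_devices=4)
# the four busiest pages share device 0, the remaining page falls on
# device 1, and devices 2 and 3 hold no active pages at all
```

In this sketch, the devices returned in `idle` correspond to the memory devices at which the power-saving mode may then be initiated.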
In particular, a CPU associated with the arrangement is extended in order to provide definite details of the location of memory accesses. Advantageously, the CPU extension occurs in the CPU's memory management unit, preferably in the translation look-aside buffer. A counter is preferably incremented each time an access through a translation look-aside buffer entry is made. Preferably, all translation look-aside buffer entries are read in order to read, and reset, the counters associated with each of the translation look-aside buffer entries. As a further feature, the counter values can be stored back into a table stored in memory at the instant at which a memory location is unloaded from the TLB. The method may then include the step of parsing these written-back values, by the CPU, to create the active page list.

As an alternative, the method can include the step of periodically reading CPU cache tags associated with the stored data in order to identify the said more frequently accessed data.

In a virtual memory system, the method of swapping infrequently accessed data to a hard disk can be extended. The SDRAM in self-refresh mode can be seen as an intermediary temporary storage device, which has relatively fast access but uses very little power. The stored data can then be swapped in and out of the hard disk either directly or via the intermediary temporary storage device, depending upon the access history of the memory pages and the current access requirement of such pages.

In controlling the number of the plurality of memory devices over which the more frequently accessed data is stored, the invention advantageously dynamically alters the amount of, for example, SDRAM immediately accessible at any particular time. Power-savings can therefore be achieved at a lower level than is known in the prior art by placing the SDRAM devices into a low-power mode, or switching them off, should they be devoid of any of the more frequently accessed data.
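The translation look-aside buffer counter scheme described above may, by way of illustration only, be sketched as follows; the class names, page identifiers and access pattern are assumptions made for the purpose of illustration:

```python
# Illustrative sketch only: each TLB entry carries an access counter;
# the operating system periodically reads and resets the counters and
# aggregates the samples into a sorted active page list.

class TlbEntry:
    def __init__(self, page):
        self.page = page
        self.counter = 0

    def access(self):
        # in hardware, this increment would occur on each access
        # taken through the entry
        self.counter += 1

class Tlb:
    def __init__(self, pages):
        self.entries = [TlbEntry(p) for p in pages]

    def read_and_reset(self):
        """Read every entry's counter, reset it, return (page, count) pairs."""
        samples = [(e.page, e.counter) for e in self.entries]
        for e in self.entries:
            e.counter = 0
        return samples

def active_page_list(samples):
    """Aggregate counter samples into pages sorted by access frequency."""
    totals = {}
    for page, count in samples:
        totals[page] = totals.get(page, 0) + count
    return sorted(totals, key=totals.get, reverse=True)

tlb = Tlb(["page_a", "page_b"])
for _ in range(3):
    tlb.entries[0].access()
tlb.entries[1].access()
ranked = active_page_list(tlb.read_and_reset())
# ranked places page_a (three accesses) ahead of page_b (one access)
```

Repeating the read-and-reset step over successive periods, and summing the samples as shown, yields the statistical active page list referred to above.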
The memory control of the present invention can therefore advantageously behave consistently, irrespective of the number of SDRAM devices that remain active at any particular time. The grouping of the stored data is therefore advantageously determined on the basis of the frequency with which the data is accessed.

According to another aspect of the present invention there is provided a memory management system for use with a memory arrangement comprising a plurality of memory devices which provide for a plurality of memory locations for the storage of data to be accessed, the system including means for identifying more frequently accessed data stored over the said plurality of memory devices and for moving the said more frequently accessed data to common devices within the said plurality of memory devices so as to decrease the number of the said plurality of devices in which the more frequently accessed data is stored, and further including means for placing those memory devices of the plurality that are then without the said more frequently accessed data into a power-saving mode. The system can advantageously include features arranged to provide for the further steps noted above.

It will therefore be appreciated that the present invention can provide for a processor that allows its connection to the SDRAM devices to be controlled such that different SDRAM devices within the memory arrangement can be placed into appropriately different power states dependent upon the activity level of the memory pages stored therein. Power reduction can therefore advantageously be achieved through reduction of the number of SDRAM devices that are required to be fully powered up. In particular, all unused memory devices can be placed in an off state, while the most frequently required pages are grouped together and located in the fully powered-up memory devices. The operating system of the CPU is advantageously therefore arranged to track the particular usage of memory pages and provide for the appropriate re-mapping of the memory pages as and when required.
The invention is described further hereinafter, by way of example only, with reference to the accompanying drawings in which:

Fig. 1 is a schematic block diagram illustrating a plurality of memory devices controlled in accordance with an embodiment of the present invention; and

Fig. 2 is a schematic block diagram illustrating the operation of an embodiment of the present invention also employing virtual memory space in the form of a hard disk.

As will be appreciated, the present invention follows from the realisation that, in order to assist in the minimisation of the power requirements within the device, the CPU should advantageously minimise the power requirements of the SDRAM devices during operation. At any particular point in time, the CPU is very unlikely to require access to all locations of all pages of the available memory; rather, there will be a relatively small number of active processes and tasks being run, and the remaining memory pages not involved in such processes and tasks will generally be dormant, awaiting their next activation. On this basis, it is identified that only a particular set of active pages will carry the majority of the accesses required by the CPU to the memory over a particular period of time.

The present invention seeks to effectively remap the active pages so that they are physically located within common devices so as to require the smallest possible space, and therefore the smallest number of memory devices, for the required operation. Once the active memory pages have been remapped in this manner, certain of the plurality of memory devices, for example SDRAM chips, will not contain any such active pages and so, not requiring such, or indeed any, access by the CPU, those devices can be put into a self-refresh or even powered-down mode so as to reduce the power consumption of the memory arrangement.
Since no active pages are now mapped to these devices, their powering-down will not have an adverse effect on the functionality or speed of the memory arrangement.
Turning now to Fig. 1, the above-mentioned concept of the present invention is illustrated further with regard to a memory arrangement 10 comprising four memory devices in the form of SDRAM chips 12, 14, 16 and 18. In accordance with the management arrangement of the present invention, the memory pages have been remapped such that the more frequently accessed pages, referred to herein as the active pages, are remapped to, as far as possible, common memory devices so as to limit the total number of devices over which the active pages are spread. For example, active pages have been removed from the SDRAM chips
14, 16 and 18 to the SDRAM chip 12, which is almost completely full of active pages. On this basis, the SDRAM chip 12 is fully powered up and in operation. Since all active pages have been physically mapped into the SDRAM chip 12, no active pages remain within the SDRAM chips 14, 16 and 18, and an appropriate form of power-saving mode can therefore be applied to these three devices.

In the example illustrated in Fig. 1, the SDRAM chip 14 is full of used pages but, since these are not active, the device 14 can be placed in a self-refresh mode. Likewise, the SDRAM chip 16 is almost full with used pages but, again, such used pages are not active, and the device 16 can likewise be placed in a self-refresh mode. As regards the SDRAM chip 18, no pages whatsoever are located therein, and so this memory device can be powered down completely.

Thus, through the memory management arrangement embodying the present invention, the previous functionality derived from all active pages can be achieved through the above-mentioned remapping by having only one, 12, of the four memory devices 12-18 in a fully powered-up state. Through such dynamic memory management, it is therefore possible to further reduce power consumption without any detrimental effect on the performance of the memory arrangement. As will be evident from the above, sufficient memory is still available for access, albeit now located on a common memory device 12.
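The per-device decision just described with reference to Fig. 1 may, purely by way of illustration, be sketched as follows; the state names and page identifiers are assumptions made for illustration only:

```python
# Illustrative sketch only of the per-device decision of Fig. 1:
# a device holding active pages stays powered up, a device holding
# only inactive used pages enters self-refresh, and an empty device
# is powered down.

def device_state(pages, active):
    """pages: page ids held by the device; active: set of active page ids."""
    if any(p in active for p in pages):
        return "powered-up"
    if pages:
        return "self-refresh"
    return "powered-down"

active = {"p0", "p1"}
devices = [["p0", "p1"],  # chip 12: holds all the active pages
           ["p2", "p3"],  # chip 14: used but inactive pages
           ["p4"],        # chip 16: used but inactive pages
           []]            # chip 18: holds no pages at all
states = [device_state(d, active) for d in devices]
```

Applying the rule to each of the four devices reproduces the situation of Fig. 1: one device fully powered up, two in self-refresh and one powered down completely.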
At any time a process may start, stop or become active, and this may alter the list of frequently accessed data; the active page list should change to reflect this. For example, if a page in a powered-down chip is required, that chip is then powered up for access. The chip is only powered down again after a set time has elapsed with no accesses to it having been made.

The CPU runs a background task, and it is ideally the responsibility of this task to shift the data contained in the active pages such that they are packed into the smallest possible space. This requires the data of a page that has left the active page list to be swapped with that of one that has been added; alternatively, the active page list has to increase in size. The task updates the operating system's memory management maps and tables to reflect the change.

A particular requirement of the invention is to provide an accurate list of currently active pages. The method adopted to achieve this is dependent on the capabilities of the CPU used in the system. In general, the operating system will be extended such that it tracks the usage of the pages over a period of time so as to build a sorted list of the pages used in the system.

In one arrangement, the CPU can be extended in order to give definite details of page access. Such an extension would most likely occur in the Translation Look-aside Buffer (TLB). This could be extended such that, each time an access through a particular entry is taken, a counter associated with that entry is incremented. The manner in which the count can be read would generally depend on the CPU in question. Some CPUs need the operating system to load and unload TLB entries; in this case the counter can advantageously be read when an entry is unloaded, and the page list updated accordingly. The operating system then also regularly probes all TLB entries to read, and reset, the counters for the other TLB entries.
This can prove important since a number of pages will be so popular that they will never leave the TLB. As one alternative, the CPU is arranged with the capability to read the TLB entries. In this case no hardware counter is required in the TLB; however, the entries in the TLB must be readable.
In this alternative arrangement, the operating system would regularly read the TLB entries. This gives an indication of which pages are currently being accessed and, over a period of time, will also give a good statistical list of the most active pages.

In another alternative, the CPU is arranged with the capability to read the cache tags. This involves a similar process to that outlined above. The main requirement here is that the current contents of the processor's caches are considered to reflect the most important data at any one point. The processor can read the cache tags, calculating the page from which the data originates; from this information, a list of the currently active pages is then built.

In yet a further arrangement, the virtual memory of the system can be extended. While a virtual memory can store pages in a "swap space" on a hard disk, instead of swapping directly to and from the hard disk, an intermediary store is used. This intermediary storage device advantageously comprises RAM in self-refresh mode. When required, the memory pages are swapped in and out of this intermediary RAM, which offers very fast access. Each time access is required to this memory, the RAM is powered up, but can then be returned to self-refresh mode after the access has been completed. If such a hard disk is present, each page in the RAM swap space is tracked to see when it was last swapped into the active pages. Once the RAM swap space is full, pages that are very infrequently required can then be swapped out of the RAM swap space into the swap space on the hard disk. If such a page is required subsequently, it can be loaded from the hard disk into the active pages, whilst a page from the active pages is put into the RAM swap space and a page from the RAM swap space is in turn put onto the hard disk.

This is described further with reference to Fig. 2, which illustrates a memory arrangement 20 comprising a set of SDRAM chips 22 containing active pages, a hard disk 24 and intermediary SDRAM chips 26 forming an intermediary swap space. As noted, when the active pages 22 become full, pages begin to be placed in the RAM swap space.
When both the active pages 22 and the RAM swap space 26 are full, pages can then begin to be placed on the hard disk 24. When a page is required and is found to be in the RAM swap space 26, the operating system is arranged to select a target page and the data can then be swapped. When a page is required and is found to be on the hard disk 24, this data is loaded into the active pages 22 and the target page is then placed into the RAM swap space 26, thereby replacing a page which has been written into the hard disk page store.

Such arrangements, and in particular the latter feature, allow the system to dynamically alter its power usage. This can be done by altering the size of the active page list. If, of course, the system is powered by a mains connection, all RAM can be switched on and used at full speed. When not connected to a mains supply, the system can advantageously adapt itself so that it uses less and less power as the battery drains.
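The three-tier page movement described with reference to Fig. 2 may, purely by way of illustration, be sketched as follows; the tier capacities and the oldest-first demotion policy are assumptions made for illustration and are not limiting:

```python
# Illustrative sketch only of the three-tier page store of Fig. 2:
# active pages in fully powered SDRAM, a RAM swap space held in
# self-refresh, and a hard disk swap space.

ACTIVE_CAP = 2    # assumed capacity of the active pages 22
RAM_SWAP_CAP = 2  # assumed capacity of the RAM swap space 26

def touch(page, active, ram_swap, disk):
    """Bring `page` into the active tier, demoting other pages as needed."""
    if page in active:
        return
    if page in ram_swap:
        ram_swap.remove(page)
    elif page in disk:
        disk.remove(page)
    if len(active) >= ACTIVE_CAP:
        victim = active.pop(0)            # demote the oldest active page
        if len(ram_swap) >= RAM_SWAP_CAP:
            disk.append(ram_swap.pop(0))  # spill the oldest swap page to disk
        ram_swap.append(victim)
    active.append(page)

active, ram_swap, disk = [], [], []
for p in ["a", "b", "c", "d", "e"]:
    touch(p, active, ram_swap, disk)
# page "a" has migrated all the way to the disk tier, pages "b" and
# "c" sit in the RAM swap space, and "d" and "e" occupy the active tier
```

As in the arrangement described above, a page found in the RAM swap space or on the hard disk is promoted to the active tier, with a target page demoted in the opposite direction through the intermediary store.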