WO2009144386A1 - Method and apparatus for storing software components in memory - Google Patents

Method and apparatus for storing software components in memory

Info

Publication number
WO2009144386A1
WO2009144386A1 PCT/FI2009/050464
Authority
WO
WIPO (PCT)
Prior art keywords
memory
ram
components
component
paged
Application number
PCT/FI2009/050464
Other languages
French (fr)
Inventor
Daniel Handley
Original Assignee
Nokia Corporation
Application filed by Nokia Corporation
Publication of WO2009144386A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/10 Address translation
    • G06F 12/1027 Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB]
    • G06F 12/1036 Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB] for multiple virtual address spaces, e.g. segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/0223 User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F 12/023 Free address space management
    • G06F 12/0238 Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • G06F 12/0246 Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory in block erasable memory, e.g. flash memory
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/06 Addressing a physical block of locations, e.g. base addressing, module addressing, memory dedication
    • G06F 12/0638 Combination of memories, e.g. ROM and RAM such as to permit replacement or supplementing of words in one module by words in another module
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/445 Program loading or initiating
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/445 Program loading or initiating
    • G06F 9/44557 Code layout in executable memory

Definitions

  • Embodiments of the present invention relate to a method and apparatus for storing software in memory.
  • The invention relates to determining which components of a device operating system are required to reside in particular areas of memory, and in particular to such a method which allows memory paging techniques to be used to reduce the amount of physical memory required in a device.
  • The invention also relates in some embodiments to a memory having contents determined by such a method.
  • Pages are predefined quantities of memory space, and they can act as a unit of memory size in the context of storing or loading code or data into memory locations.
  • Demand paging is a technique which involves loading pages of code or data into memory on demand, i.e. based on when they are required for a processing operation.
  • a method comprising :- determining if a software component is capable of being divided into memory pages for loading into and out of random access memory (RAM); storing the software component in a first part of a memory or a second part of the memory in dependence on the determination as to whether the component is capable of being divided into memory pages for loading into and out of RAM; wherein the first part of the memory is a part from which software components can be paged in pages from the memory into RAM for execution, and the second part of the memory is a part from which whole components are read into RAM for execution, without being paged.
  • the present invention provides apparatus comprising :- a processor; a memory having a first part and a second part; and random access memory (RAM); wherein the processor is arranged to cause the apparatus to:- i) determine if a software component is capable of being divided into memory pages for loading into and out of random access memory (RAM); and ii) store the software component in the first part of the memory or the second part of the memory in dependence on the determination as to whether the component is capable of being divided into memory pages for loading into and out of RAM; wherein the first part of the memory is a part from which software components can be paged in pages from the memory into RAM for execution, and the second part of the memory is a part from which whole components are read into RAM for execution, without being paged.
  • The invention provides a memory having a first part from which software components can be paged in pages from the memory into a RAM of a computing device for execution, and a second part from which whole components are read into RAM for execution, without being paged, wherein the memory has stored, in the first part and the second part, software components which have been stored in the first part or the second part using the method and apparatus of any of the preceding claims.
  • the invention can provide apparatus comprising:- processor means; memory means having a first part and a second part; and random access memory (RAM) means; wherein the processor means is arranged to cause the apparatus to:- i) determine if a software component is capable of being divided into memory pages for loading into and out of random access memory (RAM) means; and ii) store the software component in the first part of the memory means or the second part of the memory means in dependence on the determination as to whether the component is capable of being divided into memory pages for loading into and out of the RAM means; wherein the first part of the memory means is a part from which software components can be paged in pages from the memory means into RAM means for execution, and the second part of the memory means is a part from which whole components are read into RAM means for execution, without being paged.
  • the processor means may include one or more separate processor cores.
  • the memory means may be provided in any suitable type of memory; one example is NAND Flash.
  • the RAM means may comprise any suitable type of memory that can be randomly accessed.
  • the invention may include a computer program, a suite of computer programs, a computer readable storage medium, or any software arrangement for implementing the method of the first example. Aspects of the invention may also be carried out in hardware, or in a combination of software and hardware.
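The method summarised in the aspects above can be sketched as follows. This is a non-authoritative illustration: the component records, names and the `pageable` flag are invented for the example, and the patent does not prescribe any particular representation.

```python
# Sketch of the claimed method: decide, per software component, whether it
# can be divided into memory pages, and store it accordingly in the part
# of memory that supports paging or the part from which whole components
# are read into RAM. Names and data layout are invented for illustration.

def store_component(component, paged_part, unpaged_part):
    """Append the component's name to the memory part it belongs in."""
    if component.get("pageable", False):
        paged_part.append(component["name"])    # paged into RAM page by page
    else:
        unpaged_part.append(component["name"])  # read whole into RAM
    return paged_part, unpaged_part

paged_part, unpaged_part = [], []
for comp in ({"name": "mediaplayer.dll", "pageable": True},
             {"name": "kernel.exe", "pageable": False}):
    store_component(comp, paged_part, unpaged_part)
```

The later embodiments refine how the pageability determination itself is made; here it is simply modelled as a flag.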
  • Figure 1 is a block diagram of a smartphone architecture.
  • Figure 2 is a diagram illustrating possible memory layouts.
  • Figure 3 is a diagram illustrating how paged data can be paged into RAM.
  • Figure 4 is a diagram illustrating a paging cache.
  • Figure 5 is a diagram illustrating how a new page can be added to the paging cache.
  • Figure 6 is a diagram illustrating how pages can be aged within a paging cache.
  • Figure 7 is a diagram illustrating how aged pages can be rejuvenated in a paging cache.
  • Figure 8 is a diagram illustrating how a page can be paged out of the paging cache.
  • Figure 9 is a diagram illustrating the RAM savings obtained using demand paging.
  • Figure 10 is a flow diagram illustrating a method according to a first embodiment of the present invention.
  • Figure 11 is a flow diagram illustrating a method performed in the first embodiment of the present invention.
  • Figure 12 is a flow diagram illustrating a method performed in a second embodiment of the present invention.
  • Figure 13 is a flow diagram showing part of the method of Figure 12 in the second embodiment of the present invention.
  • Figure 14 is a flow diagram illustrating another part of the method of Figure 12 in the second embodiment of the present invention.
  • Figure 15 is a diagram of a ROM which has had files allocated to it using an embodiment of the invention.
  • FIG. 1 shows an example of a device that may benefit from embodiments of the present invention.
  • The smartphone 10 comprises hardware to perform the telephony functions, together with an application processor and corresponding support hardware to enable the phone to have other functions desired in a smartphone, such as messaging, calendar, word processing functions and the like.
  • The telephony hardware is represented by the RF processor 102, which provides an RF signal to antenna 126 for the transmission of telephony signals, and for the receipt of such signals therefrom.
  • Also provided is a baseband processor 104, which provides signals to and receives signals from the RF processor 102.
  • the baseband processor 104 also interacts with a subscriber identity module 106.
  • Also provided are a display 116 and a keypad 118. These are controlled by an application processor 108, which is often a separate integrated circuit from the baseband processor 104 and RF processor 102.
  • a power and audio controller 120 is provided to supply power from a battery to the telephony subsystem, the application processor, and the other hardware. Additionally, the power and audio controller 120 also controls input from a microphone 122, and audio output via a speaker 124.
  • In order for the application processor 108 to operate, various different types of memory are often provided. Firstly, the application processor 108 is provided with some Random Access Memory (RAM) 112, into which data and program code can be written and from which they can be read at will. Code placed anywhere in RAM can be executed by the application processor 108 from the RAM.
  • Separate user memory 110 is used to store user data, such as user application programs (typically higher layer application programs which determine the functionality of the device), as well as user data files and the like.
  • Modern operating systems can be found on anything composed of integrated circuits, like personal computers, Internet servers, cell phones, music players, routers, switches, wireless access points, network storage, game consoles, digital cameras, DVD players, sewing machines, and telescopes.
  • An operating system is the software that manages the sharing of the resources of the device, and provides programmers with an interface to access those resources.
  • An operating system processes system data and user input, and responds by allocating and managing tasks and internal system resources as a service to users and programs on the system. At its most basic, the operating system performs tasks such as controlling and allocating memory, prioritising system requests, controlling input and output devices, facilitating networking, and managing files.
  • An operating system is in essence an interface by which higher level applications can access the hardware.
  • On the smartphone 10, an operating system is provided, which is started when the device is first switched on.
  • the operating system code is commonly stored in a Read-Only Memory, and in modern devices, the Read-Only Memory is often NAND Flash ROM 114.
  • The ROM will store the necessary operating system components in order for the device 10 to operate, but other software programs may also be stored, such as application programs, and in particular those application programs which are mandatory to the device, such as, in the case of a smartphone, communications applications and the like. These would typically be the applications which are bundled with the smartphone by the device manufacturer when the phone is first sold. Further applications which are added to the smartphone by the user would usually be stored in the user memory 110.
  • XIP: eXecute-In-Place
  • The ROM situation is further complicated when the underlying media is not XIP. This is the case for NAND flash, used in many modern devices. Here code in NAND is copied (or shadowed) to RAM, where it can be executed in place. One way of achieving this is to copy the entire ROM contents into RAM during system boot and use the Memory Management Unit (MMU) to map the copied code to the addresses at which it expects to execute.
  • In Figure 2, layout A shows how the NAND flash 20 is structured in a simple example. All the ROM contents 22 are permanently resident in RAM and any executables in the user data area 24 (for example the C: or D: drive) are copied into RAM as they are needed.
  • the above method can be costly in terms of RAM usage, and a more efficient scheme can be used to split the ROM contents into those parts required to boot the OS, and everything else.
  • the former is placed in the Core image as before and the latter is placed into another area called the Read-Only File System (ROFS).
  • Code in ROFS is copied into RAM as it is needed at runtime, at the granularity of an executable (or other whole file), in the same way as executables in the user data area.
  • the component responsible for doing this is the 'Loader', which is part of the File Server process.
  • In some systems there are several ROFS images, for example localisation and/or operator-specific images.
  • the first one (called the primary ROFS) is combined with the Core image into a single ROM- like interface by what is known as the Composite File System.
  • Layout B in Figure 2 shows a Composite File System structure of another example.
  • ROM 30 is divided into the Core Image 32 comprising those components of the OS which will always be loaded into RAM, and the ROFS 34 containing those components which do not need to be continuously present in RAM, but which can be loaded in and out of RAM as required.
  • components in the ROFS 34 are loaded in and out of RAM as whole components when they are required (in the case of loading in) or not required. Comparing this to layout A, it can be seen that layout B is more RAM-efficient because some of the contents of the ROFS 34 are not copied into RAM at any given time. The more unused files there are in the ROFS 34, the greater the RAM saving.
  • Virtual memory techniques are known in the art, where the combined size of any programs, data and stack exceeds the physical memory available, but programs and data are split up into units called pages.
  • the pages which are required to be executed can be loaded into RAM, with the rest of the pages of the program and data stored in non XIP memory (such as on disk).
  • Demand paging refers to a form of paging where pages are loaded into memory on demand as they are needed, rather than in advance. Demand paging therefore generally relies on page faults occurring to trigger the loading of a page into RAM for execution.
  • An example embodiment of the invention to be described is based upon the smartphone architecture shown in Figure 1, and in particular a smartphone running Symbian OS.
  • In Symbian OS, the part of the operating system which is responsible overall for loading programs and data from non-XIP memory into RAM is the "loader".
  • Many further details of the operation of the loader can be found in Sales, J., Symbian OS Internals, John Wiley & Sons, 2005, in particular chapter 10 thereof, the entire contents of which are incorporated herein by reference.
  • the operation of the loader is modified to allow demand paging techniques to be used within the framework of Symbian OS.
  • Consider a smartphone having a composite file system (CFS) as previously described, wherein the CFS provides a Core image comprising those components of the OS which will always be loaded into RAM, and the ROFS containing those components which do not need to be continuously present in RAM, but which can be loaded in and out of RAM as required.
  • the principles of virtual memory are used on the core image, to allow data and programs to be paged in and out of memory when required or not required. By using virtual memory techniques such as this, then RAM savings can be made, and overall hardware cost of a smartphone reduced.
  • XIP ROM Paging can refer to reading in required segments ("pages") of executable code into RAM as they are required, at a finer granularity than that of the entire executable. Typically, page size may be around 4kB; that is, code can be read in and out of RAM as required in 4kB chunks. A single executable may comprise a large number of pages. Paging is therefore very different from the operation of the ROFS, for example, wherein whole executables are read in and out of RAM as they are required to be run.
  • an XIP ROM image is split into two parts, one containing unpaged data and one containing data paged on demand.
  • the unpaged data is those executables and other data which cannot be split up into pages.
  • the unpaged data consists of kernel-side code plus those parts that should not be paged for other reasons (e.g. performance, robustness, power management, etc).
  • the terms 'locked down' or 'wired' can also be used to mean unpaged.
  • Paged data in this example is those executables and other data which can be split up into pages.
  • the unpaged area at the start of the XIP ROM image is loaded into RAM as normal but the linear address region normally occupied by the paged area is left unmapped - i.e. no RAM is allocated for it in this example.
  • When a thread accesses memory in the paged area, it takes a page fault.
  • the page fault handler code in the kernel then allocates a page of RAM and reads the contents for this from the XIP ROM image contained on storage media (e.g. NAND flash).
  • a page is a convenient unit of memory allocation: in this example it is 4kB.
  • the thread then continues execution from the point where it took the page fault. This process is referred to in this example embodiment as 'paging in' and is described in more detail later.
  • When the free RAM on the system reaches zero, memory allocation requests can be satisfied by taking RAM from the paged-in XIP ROM region. As RAM pages in the XIP ROM region are unloaded, they are 'paged out'.
  • Figure 3 shows the operations just described.
  • a page may contain data from one or more files and page boundaries do not necessarily coincide with file boundaries in the example embodiment.
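Because page boundaries need not coincide with file boundaries, a file's byte range within the ROM image maps onto a run of page numbers, and a boundary page may also hold bytes of neighbouring files. A minimal sketch, with file offsets invented for illustration:

```python
# Which pages of the ROM image does a byte range occupy? Uses the 4 kB
# page size of the example embodiment; offsets are invented.

PAGE_SIZE = 4096  # 4 kB

def pages_for(offset, length):
    """Page numbers covered by the byte range [offset, offset + length)."""
    first = offset // PAGE_SIZE
    last = (offset + length - 1) // PAGE_SIZE
    return list(range(first, last + 1))

# A 5000-byte file at image offset 6000 straddles pages 1 and 2, so
# paging in page 1 also brings in whatever precedes the file in the image.
spanned = pages_for(6000, 5000)
```

This is why a single page fault may make parts of more than one file available at once.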
  • layout C shows an XIP ROM paging structure according to the example embodiment.
  • ROM 40 comprises an unpaged core area 42 containing those components which should not be paged, and a paged core area 44 containing those components which should reside in the core image rather than the ROFS, but which can be paged.
  • ROFS 46 then contains those components which do not need to be in the Core image.
  • Although the unpaged area of the Core image may be larger than the total Core image in layout B, only a fraction of the contents of the paged area needs to be copied into RAM compared to the amount of loaded ROFS code in layout B.
  • Live Page - A page of paged memory whose contents are currently available.
  • Dead Page - A page of paged memory whose contents are not currently available.
  • Page In - The act of making a dead page into a live page.
  • Page Out - The act of making a live page into a dead page.
  • When a page is paged out, the RAM used to store its contents may then be reused for other purposes.
  • efficient performance of the paging subsystem is dependent on the algorithm that selects which pages are live at any given time, or conversely, which live pages should be made dead.
  • the paging subsystem of this embodiment approximates a Least Recently Used (LRU) algorithm for determining which pages to page out.
  • the memory management unit 28 (MMU) provided in the example device is a component comprising hardware and software which has overall responsibility for the proper operation of the device memory, and in particular for allowing the application processor to write to or read from the memory.
  • the MMU is part of the paging subsystem of this example embodiment.
  • the paging algorithm provides a "live page list". All live pages are stored on the 'live page list', which is a part of the paging cache.
  • Figure 4 shows the live page list.
  • The live page list is split into two sub-lists, one containing young pages (the "young page list" 72) and the other old pages (the "old page list" 74).
  • the memory management unit (MMU) 58 in the device of this example is used to make all young pages accessible to programs but the old pages inaccessible. However, the contents of old pages are preserved and they still count as being live.
  • the net effect is of a FIFO (first-in, first-out) list in front of an LRU list, which results in less page churn than a plain LRU.
  • Figure 5 shows what happens when a page is "paged in" in this example embodiment. When a page is paged in, it is added to the start of the young list 72 in the live page list, making it the youngest.
  • The paging subsystem of some embodiments attempts to keep the relative sizes of the two lists equal to a value called the young/old ratio. If this ratio is R, the number of young pages is Ny and the number of old pages is No, then whenever Ny > R × No, a page is taken from the end of the young list 72 and placed at the start of the old list 74. This process is called ageing, and is shown in Figure 6.
  • When the operating system requires more RAM for another purpose, it may obtain the memory used by a live page.
  • the 'oldest' live page is selected for paging out, turning it into a dead page, as shown in Figure 8. If paging out leaves too many young pages, according to the young/old ratio, then the last young page (e.g. Page D in Figure 8) would be aged. In this way, the young/old ratio helps to maintain the stability of the paging algorithm, and ensure that there are always some pages in the old list.
  • the above actions are executed in the context of the thread that tries to access the paged memory.
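The live-page-list behaviour of Figures 4 to 8 (paging in, ageing, rejuvenation and paging out) can be sketched as a simplified model. This is not the Symbian kernel implementation; the class and method names are invented, and details such as locking and RAM accounting are omitted.

```python
# Simplified model of the paging cache: a young FIFO list in front of an
# old list, approximating LRU as described above.
from collections import deque

class PagingCache:
    def __init__(self, ratio=1):
        self.young = deque()  # youngest page at the left; pages accessible
        self.old = deque()    # contents preserved, but marked inaccessible
        self.ratio = ratio    # target young/old ratio R

    def _age(self):
        # While Ny > R * No, move a page from the end of the young list
        # to the start of the old list (ageing, Figure 6).
        while len(self.young) > self.ratio * len(self.old):
            self.old.appendleft(self.young.pop())

    def page_in(self, page):
        # A paged-in page becomes the youngest (Figure 5).
        self.young.appendleft(page)
        self._age()

    def rejuvenate(self, page):
        # Accessing an old page makes it young again (Figure 7).
        self.old.remove(page)
        self.young.appendleft(page)
        self._age()

    def page_out(self):
        # The oldest live page becomes a dead page (Figure 8); ageing
        # afterwards keeps the old list populated.
        victim = self.old.pop()
        self._age()
        return victim

cache = PagingCache(ratio=1)
for page in "ABCD":
    cache.page_in(page)
```

After paging in A, B, C and D with R = 1, the model holds young pages D and C and old pages B and A, and paging out evicts A, the least recently paged-in page.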
  • DP: demand paging
  • Compared with the non-DP composite file system case (Figure 2, layout B), the performance overhead of paging can be outweighed by the performance gain of loading less code into RAM. This is sometimes known as 'lazy loading' of code.
  • If the non-DP case consists of a large core image (i.e. something closer to Figure 2, layout A), most or all of the code involved in a use-case may already be permanently loaded, and so the performance improvement of lazy loading may be reduced.
  • An exception to this is during boot, where the cost of loading the whole core image into RAM contributes to the overall boot time.
  • a second possible performance improvement lies in improved stability of the device.
  • the stability of a device is often at its weakest in Out Of Memory (OOM) situations. Poorly written code may not cope well with exceptions caused by failed memory allocations. As a minimum, an OOM situation will degrade the user experience.
  • the increased RAM saving makes it more difficult for the device to go OOM, avoiding many potential stability issues.
  • the RAM saving achieved by DP is proportional to the amount of code loaded in the non-DP case at a particular time. For instance, the RAM saving when 5 applications are running is greater than the saving immediately after boot. This can make it even harder to induce an OOM situation. Note that this increased stability may only apply when the entire device is OOM. Individual threads may have OOM problems due to reaching their own heap limits. DP may not help in these cases.
  • Demand paging can introduce three new configurable parameters to the system: the set of unpaged components, the minimum size of the paging cache, and the young/old ratio.
  • The first two are discussed below; the third should be determined empirically.
  • a number of components are explicitly made unpaged in example embodiments of the invention, to meet the functional and performance requirements of a device.
  • the performance overhead of servicing a page fault is unbounded and variable so it may be desirable to protect some critical code paths by making files unpaged. Chains of files and their dependencies may need to be unpaged to achieve this. It may be possible to reduce the set of unpaged components by breaking unnecessary dependencies and separating critical code paths from non-critical ones.
  • a minimum paging cache size can be defined. If a system memory allocation would cause the paging cache to drop below the minimum size, then the allocation fails.
  • the paging cache grows but any RAM used by the cache above the minimum size does not contribute to the amount of used RAM reported by the system. Although this RAM is really being used, it will be recycled whenever anything else in the system requires the RAM. So the effective RAM usage of the paging cache is determined by its minimum size.
  • the minimum paging cache size relates to a minimum number of pages which should be in the paging cache at any one moment.
  • The pages in the paging cache are divided between the young list and the old list. This is not essential, however, and in other embodiments the paging cache may not be divided, or may be further subdivided into more than two lists. To help prevent thrashing, it is useful to maintain an overall minimum size of the list, and to make the pages therein accessible without having to be re-loaded into memory.
  • the effective RAM saving is the size of all paged components minus the minimum size of the paging cache. Note that when a ROFS section is introduced, this calculation is much more complicated because the contents of the ROFS are likely to be different between the non-DP and DP cases.
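The effective RAM saving described above is simple arithmetic; all sizes below are invented for illustration:

```python
# Effective RAM saving = total size of paged components
#                        minus the minimum paging cache size.
# Sizes in kB are invented for illustration.
paged_component_sizes_kb = [512, 256, 1024, 128]  # sizes of paged components
min_paging_cache_kb = 64 * 4                      # e.g. 64 pages of 4 kB

effective_saving_kb = sum(paged_component_sizes_kb) - min_paging_cache_kb
```

Here 1920 kB of paged components against a 256 kB minimum cache yields an effective saving of 1664 kB compared with keeping everything resident.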
  • the RAM saving can be increased by reducing the set of unpaged components and/or reducing the minimum paging cache size (i.e. making the configuration more 'stressed'). Performance can be improved (up to a point) by increasing the set of unpaged components and/or increasing the minimum paging cache size (i.e. making the configuration more 'relaxed'). However, if the configuration is made too relaxed then it is possible to end up with a net RAM increase compared with a non-DP ROM.
  • the RAM savings made by placing the paged executable in the core image may be offset by the RAM loss of having its unpaged dependencies in the core ROM image as well.
  • This problem is referred to as the "core/ROFS split", and previously has been solved manually on a device by device basis.
  • Solving the core/ROFS split manually is time consuming, and does not in fact guarantee that an appropriate split is obtained that results in a RAM saving.
  • A different approach to determining the "core/ROFS split", i.e. which components should be included in the Core image and which in the ROFS, is therefore desirable, which can help to ensure that the RAM saving benefits of demand paging are obtained.
  • software components such as operating system components or other components can be marked as paged or unpaged by changing a flag in the header of the component (typically for executable components only) or adding a keyword to the instruction file that places files in ROM.
  • The default behaviour of unmarked executable components can also be specified. Unmarked non-executable components will always be paged.
  • all paged executables and their dependencies are placed in the core ROM image. Only unpaged dependencies that have no paged executables dependent on them are placed in the primary ROFS image. This example strategy does not attempt to limit the number of unpaged executables in the core ROM image.
  • some unpaged components may in practice be used (and hence loaded into RAM) all or most of the time, irrespective of whether they are placed in the core ROM image or the primary ROFS image. These unpaged components are collected into a 'privileged set' of components that are placed in the core ROM image. All other unpaged components are placed in the primary ROFS image in this embodiment.
  • the 'privileged set' may also contain unpaged executables that have a large number of paged executables dependent on them, where the cost of placing an unpaged executable in the core ROM image is outweighed by the benefit of having its paged dependencies in the core, resulting in a net RAM saving. Only those paged executables that have dependencies that are all paged or in the 'privileged set' are placed in the core ROM image. Other paged executables are placed in the primary ROFS image.
  • Determining the 'privileged set' is done in this example embodiment by creating an initial ROM with as many components in the primary ROFS as possible.
  • the use-case that is to be optimised for is then executed and the system is interrogated for which components are loaded during the use-case.
  • the 'privileged set' is the intersection of the list of loaded components and the list of unpaged components.
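The 'privileged set' computation just described is a set intersection; a minimal sketch, with component names invented for illustration:

```python
# The privileged set = components observed loaded during the target
# use-case, intersected with components marked unpaged. Names invented.
loaded_during_use_case = {"euser.dll", "efsrv.dll", "camera.dll"}
unpaged_components = {"ekern.exe", "efsrv.dll", "euser.dll"}

privileged_set = loaded_during_use_case & unpaged_components
```

Only the unpaged components that the use-case actually loads earn a place in the core ROM image; unpaged components never observed loaded stay candidates for the primary ROFS.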
  • paged components which have all paged dependencies are placed in the core image, and unpaged components are placed in the ROFS, unless the component is to be in the core image, for example if it is a kernel component.
  • Paged components are also placed in the ROFS in this embodiment if any of their dependencies are unpaged, again unless the component is to be in the core image for some other reason, such as it being a kernel component.
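The placement rules of the last few paragraphs can be sketched as a function over a dependency graph. The data layout and component names are invented for illustration; the kernel-component override and the privileged-set promotion are modelled as stated above.

```python
# Sketch of core/ROFS placement: kernel components stay in the core
# image; a paged component goes in the core only if every dependency is
# itself paged or in the 'privileged set'; other components go to the
# ROFS unless privileged. Names invented for illustration.

def place(component, paged, deps, kernel, privileged=frozenset()):
    """Return 'core' or 'rofs' for one component."""
    if component in kernel:
        return "core"                 # must be in the core image anyway
    if component in paged:
        deps_ok = all(d in paged or d in privileged
                      for d in deps.get(component, ()))
        return "core" if deps_ok else "rofs"
    # Unpaged, non-kernel components go to the ROFS unless privileged.
    return "core" if component in privileged else "rofs"

paged = {"app.exe", "ui.dll", "viewer.exe"}
deps = {"app.exe": {"ui.dll"}, "viewer.exe": {"legacy.dll"}}
kernel = {"ekern.exe"}
```

For example, `app.exe` (all dependencies paged) lands in the core image, while `viewer.exe` is pushed to the ROFS by its unpaged dependency `legacy.dll`, unless that dependency is promoted to the privileged set.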
  • Some embodiments of the present invention are directed towards providing a method and apparatus for deciding whether a particular software component to be loaded onto a device such as a smartphone or the like needs to be in the part of memory which is capable of being demand paged, or whether it should be in a different part of the memory which is not capable of being demand paged, such as, for example, in the case of the Symbian operating system, the Read-Only File System (ROFS).
  • a device such as a smartphone or the like needs to be in the part of memory which is capable of being demand paged, or whether it should be in a different part of the memory which is not capable of being demand paged, such as, for example, in the case of the Symbian operating system, the Read-Only File System (ROFS).
  • Figure 10 illustrates a flow diagram of one example embodiment of the present invention. However, blocks 10.2 and 10.4 of Figure 10 are common to all of the described embodiments, and will be described first.
  • a determination as to which software components need to be present in the Read-Only Memory which is being built is performed. This is a high level operation, and in this example involves compiling a list of all of the software components which are required to be installed onto the device. In the particular embodiment being described, we are concerned with which components of an operating system are to be installed on the device. It will be understood that block 10.2 may be performed by a device design team. Some devices may require only a subset of components of a particular operating system, whereas other devices may require more components, or a different subset of components. This will depend upon the device's purpose and its required functionality.
  • the second block in the method, 10.4 involves a determination for each software component to be installed as to whether the component is "paged" or "unpaged".
  • a "paged" component is a component which is capable of being paged, i.e. the code of the component can be read into RAM in small blocks known as memory pages, from where it can be executed. If a component is not capable of being split into pages for execution, then it is deemed to be "unpaged". Here, in order to be executed, the component must typically be loaded whole into RAM, from where it can then be executed.
  • these embodiments of the invention can then be used to determine which components should be placed in that part of the memory which is capable of being paged into RAM, and which components should be placed into that other part of the memory from which components are read whole into RAM.
  • the part of the memory from which components can be paged is referred to as the core image
  • the part of the memory from which components are read whole into RAM is referred to as the Read-Only File System (ROFS).
  • the core image and the ROFS together make up the composite file system (CFS).
  • a determination of the core/ROFS split for each component is performed dependent on the paged status of the component and of its dependent components. Further details of this process are shown in the flow diagram of Figure 11, discussed later.
  • the core ROM image which is built at block 10.8 is built to include the components determined to be in the core, and correspondingly the ROFS is also built to contain the components determined to be in the ROFS.
  • the core ROM image which is built at block 10.8 is stored in the NAND Flash, during the smartphone device manufacturing process.
  • blocks 10.2 to 10.8 should be performed during the device design process, whereas block 10.10 is performed during the device manufacturing process in this example.
  • the first block in the process of 10.6 is performed.
  • an evaluation is made as to whether the present component which is being evaluated is an executable component. If the component is not an executable, then processing proceeds to block 11.4, wherein the paged status of the component is evaluated.
  • This evaluation is performed by looking at the instruction file (the OBY file, in this example), which records the paged status of the component. In the example, if the paged status is that the component is paged, then at block 11.6 the component is placed into the core ROM image. If the paged status is that the component is unpaged, then at block 11.18 the component is placed in the primary ROFS image.
  • the paged status of the component in the OBY file is examined. If there is no marking for this component, then processing proceeds to block 11.12. However, if the marking is such that the component is marked as paged, then processing proceeds to block 11.6, and the component is placed in the core ROM image. If the component marking is "unpaged", then processing proceeds to a second evaluation process of block 11.16, wherein it is determined whether the component has any paged components dependent upon it. If the answer to this is positive i.e. the component does have paged components dependent upon it, then even though the component itself is unpaged, it is placed, at block 11.6, in the core ROM image.
  • the reason for this in the current example is to maintain the system criterion that a paged component which is present in the core ROM image should also have its dependencies present in the core ROM image, even if those dependencies are not themselves paged. If, at block 11.16 the component is determined not to have any paged components dependent upon it, then there is no need for the unpaged component to be placed in the core ROM image, and at block 11.18 it can instead be placed in the primary ROFS image.
  • processing proceeds to block 11.12, wherein the executable header is examined to determine if that contains a marking as to the paged status. If here there is a marking that the component is paged, then processing proceeds to block 11.6, wherein the component is placed in the core ROM image. If the marking is such that the component is unpaged, then processing proceeds to block 11.16, wherein an evaluation is performed as to whether the component has paged components dependent upon it. If yes, then the component is placed in the core ROM image at block 11.6, for the same reasons as previously. If no, then the component is placed in the primary ROFS image, at block 11.18.
  • If the evaluation of block 11.12 of the example indicates that the executable header has no paged marking, then processing proceeds to block 11.14.
  • the default behaviour for unmarked executables is followed. If the default behaviour for unmarked executables is to page the executable, then the component is placed in the core ROM image at block 11.6, otherwise, if the default behaviour is to have unmarked executables unpaged, then the evaluation of block 11.16 is again performed.
  • the component has paged components dependent upon it then the component is placed in the core ROM image, whereas if the component has no paged components dependent upon it then the component is placed in the primary ROFS image.
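The decision flow of Figure 11 described above can be condensed into a short sketch. The function and parameter names are illustrative assumptions; the block numbers in the comments refer to Figure 11 of the source text.

```python
def place_component_fig11(is_executable, oby_marking, header_marking,
                          default_paged, has_paged_dependents):
    """Return 'core' or 'rofs' for one component, following Figure 11.
    Markings are 'paged', 'unpaged' or None (no marking present)."""
    if not is_executable:
        # Blocks 11.2/11.4: non-executables follow the OBY file marking.
        return "core" if oby_marking == "paged" else "rofs"
    # Executables: OBY marking first (11.10), then the executable header
    # (11.12), then the default behaviour for unmarked executables (11.14).
    marking = oby_marking or header_marking
    if marking is None:
        marking = "paged" if default_paged else "unpaged"
    if marking == "paged":
        return "core"                       # block 11.6
    # Block 11.16: an unpaged executable still goes in the core image when
    # paged components depend on it, so their dependencies stay in the core.
    return "core" if has_paged_dependents else "rofs"
```

For example, an unmarked unpaged executable with paged dependents is kept in the core image, matching the criterion of block 11.16.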
  • the above processing is performed in turn for every component, to determine whether the component should be placed in the core ROM image, or the primary ROFS image.
  • the contents of the core ROM image and the primary ROFS image have been obtained i.e. the ROM contents have been built.
  • the ROM contents output from this method can be loaded into the NAND Flash in the device.
  • paged components are placed in the core ROM image, together with their dependencies, whether the dependent components are paged or not. If a component is unpaged, and has no paged dependencies, then it is placed in the primary ROFS image.
  • the benefits of demand paging can be obtained for those components for which demand paging is suitable.
  • a RAM saving will be obtained by doing this; however, whether a RAM saving is obtained in a particular case will depend upon the size of the unpaged dependent components which also have to be included in the core ROM image.
  • a RAM saving will be obtained using this example embodiment, unless any one of the unpaged dependent components is particularly large.
  • further processing is performed to determine whether or not unpaged components are in fact in a privileged set of components that should be in the core ROM image anyway.
  • the second example embodiment is based upon the realisation that some unpaged components may in practice be used (and hence loaded into RAM) all or most of the time, irrespective of whether they are placed in a core ROM image, or the primary ROFS image. If this is the case i.e. that such components are in any event loaded into RAM all or most of the time, then those components may as well be placed into the core ROM image.
  • Figure 13 shows the procedure to be performed to build the privileged set of unpaged components.
  • a ROM is created with most components (other than components for the kernel) placed in the ROFS, in a manner similar to that shown previously in layout A of Figure 2.
  • This ROM is then installed on a test device.
  • the device is booted, and a particular use scenario is run on the device. For example, in the case of a smartphone, the use scenario may be performing a call, sending an email, or the like.
  • the loading of the software components into RAM is monitored, and a list is compiled of which components are loaded into RAM during the use test.
  • the list can be examined, and a second list compiled of which components of those which were loaded into RAM were in fact unpaged components.
  • the unpaged components which were loaded into RAM during the use test are recorded as members of the privileged set.
  • the privileged set comprises a list of software component names, which are all unpaged components, but which were loaded into RAM during the use test scenario.
  • the privileged set can contain the names of those unpaged components which are loaded into RAM during several different uses of the device.
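The procedure just described amounts to a set intersection, which can be sketched as follows. The helper name and the component names are illustrative assumptions.

```python
def build_privileged_set(loaded_per_use_case, unpaged_components):
    """Privileged set: unpaged components observed to be loaded into RAM
    during at least one monitored use scenario (Figure 13)."""
    # Union across use cases: loaded into RAM during any monitored use.
    loaded_in_any_use = set().union(*loaded_per_use_case)
    # Intersection with the list of unpaged components.
    return loaded_in_any_use & set(unpaged_components)

# Two monitored scenarios on a hypothetical smartphone build:
calls = {"phone.exe", "gsm.dll", "ui.dll"}
email = {"mail.exe", "gsm.dll", "ui.dll"}
print(build_privileged_set([calls, email], ["gsm.dll", "diag.dll"]))
# gsm.dll is unpaged and was loaded during use, so it joins the set;
# diag.dll is unpaged but never loaded, so it stays out.
```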
  • Further uses will depend on the device itself. For example, an MP3 player which only plays stored MP3 files may not have any further uses.
  • an MP3 player which also has an in-built radio may have the additional radio use.
  • the core/ROFS split determination is performed for each component. This is performed in dependence on the paged status of the component itself as well as its dependencies, and also whether the dependent components are a member of the privileged set.
  • the core/ROFS split determination is repeated for each component which is to be installed on the device.
  • the core ROM image can be built, as well as the primary ROFS image.
  • the core ROM image and primary ROFS image obtained through blocks 12.8 and 12.10 can be stored on the NAND Flash in the device.
  • Figure 14 shows the method performed during the core/ROFS split determination of block 12.8.
  • the procedure shown in Figure 14 is repeated for each component for which the core/ROFS split determination needs to be made.
  • a first evaluation is performed at block 14.2 as to whether the component is an executable component. If the component is not an executable component, then at block 14.4 the paged status of the component in the OBY file is examined in this example. If the status is that the component is "paged" then at block 14.6 the component is placed in the core ROM image, whereas otherwise if the status is "unpaged", then at block 14.14 the component is placed in the primary ROFS image.
  • processing proceeds to a second evaluation at block 14.8, wherein the default paging behaviour for executables is examined. If the default paging behaviour for executables is that all executables should be paged, then of course the component must be placed in the core ROM image, at block 14.6. However, if the default paging behaviour for executables is that executables should not be paged, then the component should be placed in the primary ROFS image, at block 14.14. If, however, there is no such default paging behaviour specified, then processing proceeds to block 14.10, wherein a further evaluation is performed on the particular paged or unpaged marking of the particular component.
  • the paged status of the particular component is examined in the OBY file. If the paged status is that the component is paged, then processing proceeds to a further evaluation, at block 14.12. This is an evaluation as to whether all of the component's dependencies are paged, or whether its dependencies are in the privileged set. If this evaluation returns positive, then this means that not only is the component itself paged, but that its dependencies are paged, or are unpaged but are in the privileged set of unpaged components which will in any event be placed in the core ROM image. If this is the case, then the component is suitable for paging, together with its dependencies, and hence is placed in the core ROM image at block 14.6.
  • processing proceeds to the evaluation of block 14.20.
  • an evaluation is performed as to whether the component is listed in the privileged set of unpaged components, which in any event should be placed in the core ROM image to be loaded into RAM. If this is the case i.e. the component is in the privileged set, then processing proceeds to block 14.6, wherein the component is placed in the core ROM image. If this is not the case, i.e. the component is unpaged, and is not in the privileged set, then the component is placed in the primary ROFS image, at block 14.14.
  • processing proceeds to block 14.16, wherein the header of the executable is examined to determine whether there is a paged or unpaged marking in the header. If the executable header indicates that the component is a paged component, then processing proceeds to block 14.12, wherein the evaluation is performed as to whether all of the component's dependencies are paged or whether the dependencies are in the privileged set. The reason for this is as described previously; a paged executable is only placed in the core ROM image if its dependencies will also be placed in the core ROM image, i.e. if its dependencies are all either paged or in the privileged set.
  • the default executable paging behaviour for the executable is examined. If this is that the executable should be paged, then processing proceeds to block 14.12, wherein the paged status of the component's dependencies, or whether the dependencies are in the privileged set, is examined. In the current example if all of the component's dependencies are either paged, or they are all in the privileged set, then the component itself can be placed in the core ROM image, at block 14.6. Conversely, if the component's dependencies are not paged or all in the privileged set, then the component is placed in the primary ROFS image, at block 14.14.
  • a second evaluation is performed at block 14.20 to determine whether the component is in the privileged set of unpaged components which in any event need to be placed in the core ROM image. If this is the case, then the component is placed in the core ROM image at block 14.6. If this is not the case, then the component is placed in the primary ROFS image, at block 14.14.
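The Figure 14 decision, with the privileged set taken into account, can be sketched as follows. All names are illustrative assumptions, and the global default-paging short-circuit of block 14.8 is omitted for brevity.

```python
def place_component_fig14(component, is_executable, oby_marking,
                          header_marking, default_paged,
                          paged_set, privileged_set, dependencies):
    """Return 'core' or 'rofs' for one component, following Figure 14."""
    if not is_executable:
        # Blocks 14.2/14.4: non-executables follow the OBY file marking.
        return "core" if oby_marking == "paged" else "rofs"
    marking = oby_marking or header_marking      # blocks 14.10 / 14.16
    if marking is None:
        marking = "paged" if default_paged else "unpaged"
    if marking == "paged":
        # Block 14.12: a paged executable goes in the core image only if
        # every dependency is itself paged or is in the privileged set.
        if all(d in paged_set or d in privileged_set
               for d in dependencies.get(component, [])):
            return "core"                        # block 14.6
    # Block 14.20: otherwise the component goes in the core image only
    # when it is itself a member of the privileged set.
    return "core" if component in privileged_set else "rofs"

paged = {"ui.dll", "app.exe"}
privileged = {"krn.dll"}                 # unpaged but always loaded
deps = {"app.exe": ["ui.dll", "krn.dll"], "net.exe": ["raw.dll"]}
print(place_component_fig14("app.exe", True, "paged", None, True,
                            paged, privileged, deps))   # core
print(place_component_fig14("net.exe", True, "paged", None, True,
                            paged, privileged, deps))   # rofs
```

In the usage above, `app.exe` is paged and all its dependencies are paged or privileged, so it lands in the core image; `net.exe` depends on an unpaged, non-privileged component and so falls through to the ROFS.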
  • a ROM is obtained which contains those paged components whose dependencies are all either paged or in the privileged set in the core ROM image, and with other components in the primary ROFS image.
  • the core ROM image contains those components which are in any event almost always loaded into RAM, together with those components which are capable of being paged.
  • the benefits of demand paging in terms of RAM savings can be obtained.
  • the privileged set was determined in dependence upon whether the unpaged components in the privileged set were in any event loaded into RAM during one or more test use cases. Thus, to determine the privileged set it was necessary to test the device using the use cases in advance.
  • the privileged set can be determined in a different way, and in particular based upon whether the components form part of the operating system kernel or not. If a component is a kernel component, then it is likely that it will almost always be loaded into RAM irrespective of the use case. Thus, a privileged set can be compiled dependent on whether the component is a kernel component. The same procedure as shown in Figure 14 can then be used, but with the different privileged set. This would result in paged components which have dependencies all of which are paged being placed in the core ROM image, but paged components which have unpaged dependencies which are not in the privileged set would be placed in the ROFS image. Unpaged components would automatically be placed in the ROFS image, unless they were kernel components, and hence in the privileged set.
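For this kernel-based variant, the privileged set needs no test runs at all; it can be derived directly from component metadata. The predicate and the naming convention below are illustrative assumptions.

```python
def kernel_privileged_set(components, is_kernel):
    """Privileged set variant: all kernel components, on the basis that
    kernel code is loaded into RAM irrespective of the use case."""
    return {name for name in components if is_kernel(name)}

# Hypothetical naming convention: kernel binaries carry a 'k' prefix.
names = ["kdrv.dll", "ui.dll", "kserv.exe", "mail.exe"]
print(kernel_privileged_set(names, lambda n: n.startswith("k")))
```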
  • a ROM image may be built which can be stored in NAND flash memory, which contains a core ROM image with those components which have been determined to be in the core ROM image so as to be suitable for demand paging, and a primary ROFS image containing those components which will not be demand paged.
  • Figure 15 shows such a ROM in accordance with an example embodiment, which is then stored in a device in NAND Flash. Because the core ROM image contains a large amount of paged data the XIP ROM image in RAM on the device is smaller, as indicated in Figure 3, and hence significant RAM savings can be made.
  • Embodiments of the present invention can provide an improved technique for determining which components of an operating system (or other software programs) should be included in an area of a memory which is capable of being paged into RAM, and which components should be included in an area of memory from which only whole components at a time are read into RAM. More particularly, embodiments of the invention can provide a technique which makes a decision as to whether a software component should be placed in the pageable area of the memory in dependence on whether the software component itself is capable of being divided into memory pages (i.e. whether the component is "paged"). In some embodiments, as well as looking at the software component itself, the dependencies of the component (i.e. the other software components on which the first component relies for its operation) are also examined to determine if they are capable of being divided into memory pages, and if they are so capable then the component and the dependencies are included in the pageable area of the memory. If the dependencies are not capable of being paged (i.e. are "unpaged"), then the component and the dependencies should not be included in the pageable area of the memory.
  • a "privileged set" of components is compiled of components which should be included in the pageable area in any event, even if the components themselves are not paged. The decision as to whether a particular component should be placed in the pageable area of the memory is then made in dependence on whether the component and its dependencies are paged, and also in dependence on whether the dependencies are in the privileged set.
  • the contents of a memory in terms of which software components should be stored in which part of the memory can be determined to help to ensure that the primary benefits of demand paging in terms of providing a RAM saving are obtained. Saving RAM in the device will typically lead to a reduction in the component cost of the device.
  • the software component may be stored in the first part of the memory if the component is capable of being divided into memory pages for loading into and out of RAM. This can help to ensure that paged components, which are capable of being subjected to demand paging, are stored in the part of the memory in which demand paging is performed, and hence benefits of demand paging can be obtained.
  • the software component is stored in the first part of the memory if the component is a dependency of another component which is capable of being divided into memory pages for loading into and out of RAM. This can help to ensure that unpaged dependencies of a paged component are also included in the part of the memory which is paged. If the above condition is not met, then the software component may be stored in the second part of the memory. This can avoid the first part of the memory becoming too large, hence allowing RAM savings to be made.
  • the software component is stored in the first part of the memory or the second part of the memory in further dependence on the determination as to whether other software components which are dependencies of the component are capable of being divided into memory pages for loading into and out of RAM.
  • the software component is stored in the first part of the memory if it is capable itself of being divided into memory pages for loading into and out of RAM and the other software components which are dependencies of the component are also all capable of being divided into memory pages for loading into and out of RAM. This can help to ensure that only paged components which can be subject to demand paging are placed in the first part of the memory, and hence RAM is not wasted in storing unpaged components which are there simply because they are a dependency of a paged component.
  • the software component is stored in the first part of the memory if it is capable itself of being divided into memory pages for loading into and out of RAM and the other software components which are dependencies of the component are members of a predetermined privileged set of components.
  • This example implementation recognises the fact that there are some unpaged components which are in any event stored in RAM almost all of the time. If these components are dependencies of a paged component, then that paged component should be included in the part of the memory which can be paged.
  • the software component is stored in the first part of the memory if it is a member of a predetermined privileged set of components.
  • the predetermined privileged set comprises those software components which during use of a computing device comprising the set of components are in any event loaded into RAM. This example embodiment recognises that if the component is in any event loaded into RAM during use then the component may as well be placed in the first part of the memory.
  • the components in the set are those components which are loaded into RAM during one or more test use cases of the device. This allows actual usage of the device to be used to optimise which components should be stored where.
  • the components in the set are those components which are loaded into RAM because they are kernel components of the computing device's operating system. These are components which need to be loaded to allow the device to operate.
  • the memory is of a type which is incapable of supporting eXecute-In-Place (XIP) operations, which is why the software components need to be loaded into RAM for execution.
  • the memory is NAND Flash memory, which is used in many modern devices because it provides large memory capacity at relatively lower cost than other types of memory.
  • the techniques of the present invention may be used to provide embodiments with different applications, such as, for example, a general purpose computer, a portable media player, or another audio visual device, such as a camera.
  • Any device or machine which incorporates a computing device provided with RAM into which data and programs need to be loaded for execution may benefit from the invention and constitute an embodiment thereof.
  • the invention may therefore be applied in many fields, to provide improved devices or machines that require less RAM to operate than had heretofore been the case.


Abstract

Embodiments of the present invention provide an improved methodology for determining which components of an operating system (or other software programs) need to be included in an area of a memory which is capable of being paged into RAM, and which components should be included in an area of memory from which only whole components at a time are read into RAM. More particularly, embodiments of the invention provide a methodology which makes a decision as to whether a software component should be placed in the pageable area of the memory in dependence on whether the software component itself is capable of being divided into memory pages (i.e. whether the component is "paged"). In some embodiments, as well as looking at the software component itself, the dependencies of the component (i.e. the other software components on which the first component relies for its operation) are also examined to determine if they are capable of being divided into memory pages, and if they are so capable then the component and the dependencies are included in the pageable area of the memory. If the dependencies are not capable of being paged (i.e. are "unpaged"), then the component and the dependencies may not be included in the pageable area of the memory.

Description

Method and Apparatus for Storing Software Components in Memory
Technical Field
Embodiments of the present invention relate to a method and apparatus for storing software in memory. In example embodiments the invention relates to determining which components of a device operating system are required to reside in particular areas of memory, and in particular to such a method which allows memory paging techniques to be used to reduce the amount of physical memory required in a device. The invention also relates in some embodiments to a memory having contents determined by such a method.
Background to the Invention
The concept of memory pages is often employed in memory management systems. Pages are predefined quantities of memory space, and they can act as a unit of memory size in the context of storing or loading code or data into memory locations. Demand paging is a technique which involves loading pages of code or data into memory on demand, i.e. based on when they are required for a processing operation.
Summary of the Invention
In a first example embodiment of the invention there is provided a method comprising :- determining if a software component is capable of being divided into memory pages for loading into and out of random access memory (RAM); storing the software component in a first part of a memory or a second part of the memory in dependence on the determination as to whether the component is capable of being divided into memory pages for loading into and out of RAM; wherein the first part of the memory is a part from which software components can be paged in pages from the memory into RAM for execution, and the second part of the memory is a part from which whole components are read into RAM for execution, without being paged. In a second example embodiment the present invention provides apparatus comprising :- a processor; a memory having a first part and a second part; and random access memory (RAM); wherein the processor is arranged to cause the apparatus to:- i) determine if a software component is capable of being divided into memory pages for loading into and out of random access memory (RAM); and ii) store the software component in the first part of the memory or the second part of the memory in dependence on the determination as to whether the component is capable of being divided into memory pages for loading into and out of RAM; wherein the first part of the memory is a part from which software components can be paged in pages from the memory into RAM for execution, and the second part of the memory is a part from which whole components are read into RAM for execution, without being paged.
In a third example, the invention provides a memory having a first part from which software components can be paged in pages from the memory into a RAM of a computing device for execution, and a second part from which whole components are read into RAM for execution, without being paged, wherein the memory has stored in the first part and the second part software components which have been stored in the first part or the second part using the method and apparatus of any of the preceding examples.
In a further example, the invention can provide apparatus comprising:- processor means; memory means having a first part and a second part; and random access memory (RAM) means; wherein the processor means is arranged to cause the apparatus to:- i) determine if a software component is capable of being divided into memory pages for loading into and out of random access memory (RAM) means; and ii) store the software component in the first part of the memory means or the second part of the memory means in dependence on the determination as to whether the component is capable of being divided into memory pages for loading into and out of the RAM means; wherein the first part of the memory means is a part from which software components can be paged in pages from the memory means into RAM means for execution, and the second part of the memory means is a part from which whole components are read into RAM means for execution, without being paged.
The processor means may include one or more separate processor cores. The memory means may be provided in any suitable type of memory; one example is NAND Flash. The RAM means may comprise any suitable type of memory that can be randomly accessed.
In other examples, the invention may include a computer program, a suite of computer programs, a computer readable storage medium, or any software arrangement for implementing the method of the first example. Aspects of the invention may also be carried out in hardware, or in a combination of software and hardware.
Brief Description of the Drawings
Features and advantages of example embodiments of the present invention will become apparent from the following description with reference to the accompanying drawings, wherein: -
Figure 1 is a block diagram of a smartphone architecture;
Figure 2 is a diagram illustrating possible memory layouts;
Figure 3 is a diagram illustrating how paged data can be paged into RAM;
Figure 4 is a diagram illustrating a paging cache;
Figure 5 is a diagram illustrating how a new page can be added to the paging cache;
Figure 6 is a diagram illustrating how pages can be aged within a paging cache;
Figure 7 is a diagram illustrating how aged pages can be rejuvenated in a paging cache;
Figure 8 is a diagram illustrating how a page can be paged out of the paging cache;
Figure 9 is a diagram illustrating the RAM savings obtained using demand paging;
Figure 10 is a flow diagram illustrating a method according to a first embodiment of the present invention;
Figure 11 is a flow diagram illustrating a method performed in the first embodiment of the present invention;
Figure 12 is a flow diagram illustrating a method performed in a second embodiment of the present invention;
Figure 13 is a flow diagram showing part of the method of Figure 12 in the second embodiment of the present invention;
Figure 14 is a flow diagram illustrating another part of the method of Figure 12 in the second embodiment of the present invention; and
Figure 15 is a diagram of a ROM which has had files allocated to it using an embodiment of the invention.
Detailed Description of Embodiments
Figure 1 shows an example of a device that may benefit from embodiments of the present invention. The smartphone 10 comprises hardware to perform the telephony functions, together with an application processor and corresponding support hardware to enable the phone to have other functions which are desired by a smartphone, such as messaging, calendar, word processing functions and the like. In Figure 1 the telephony hardware is represented by the RF processor 102 which provides an RF signal to antenna 126 for the transmission of telephony signals, and the receipt therefrom. Additionally provided is baseband processor 104, which provides signals to and receives signals from the RF Processor 102. The baseband processor 104 also interacts with a subscriber identity module 106.
Also provided are a display 116, and a keypad 118. These are controlled by an application processor 108, which is often a separate integrated circuit from the baseband processor 104 and RF processor 102. A power and audio controller 120 is provided to supply power from a battery to the telephony subsystem, the application processor, and the other hardware. Additionally, the power and audio controller 120 also controls input from a microphone 122, and audio output via a speaker 124.
In order for the application processor 108 to operate, various different types of memory are often provided. Firstly, the application processor 108 is provided with some Random Access Memory (RAM) 112 into which data and program code can be written and read from at will. Code placed anywhere in RAM can be executed by the application processor 108 from the RAM.
Additionally provided is separate user memory 110, which is used to store user data, such as user application programs (typically higher layer application programs which determine the functionality of the device), as well as user data files, and the like. Many modern electronic devices make use of operating systems. Modern operating systems can be found on anything composed of integrated circuits, like personal computers, Internet servers, cell phones, music players, routers, switches, wireless access points, network storage, game consoles, digital cameras, DVD players, sewing machines, and telescopes. An operating system is the software that manages the sharing of the resources of the device, and provides programmers with an interface to access those resources. An operating system processes system data and user input, and responds by allocating and managing tasks and internal system resources as a service to users and programs on the system. At its most basic, the operating system performs tasks such as controlling and allocating memory, prioritising system requests, controlling input and output devices, facilitating networking, and managing files. An operating system is in essence an interface by which higher level applications can access the hardware of the device.
In order for the application processor 108 to operate in the embodiment of Figure 1, an operating system is provided, which is started when the smartphone system 10 is first switched on. The operating system code is commonly stored in a Read-Only Memory, and in modern devices, the Read-Only Memory is often NAND Flash ROM 114. The ROM will store the necessary operating system component in order for the device 10 to operate, but other software programs may also be stored, such as application programs, and the like, and in particular those application programs which are mandatory to the device, such as, in the case of a smartphone, communications applications and the like. These would typically be the applications which are bundled with the smartphone by the device manufacturer when the phone is first sold. Further applications which are added to the smartphone by the user would usually be stored in the user memory 110.
ROM (Read-Only Memory) traditionally refers to memory devices that physically store data in a way which cannot be modified. These devices also allow direct random access to their contents and so code can be executed from them directly - code is eXecute-In-Place (XIP). This has the advantage that programs and data in ROM are always available and don't require any action to load them into memory. The term ROM can be used to mean 'data stored in such a way that it behaves like it is stored in read-only memory'. The underlying media may actually be physically writeable, like RAM or flash memory but the file system presents a ROM-like interface to the rest of the OS, for example as a particular drive.
The ROM situation is further complicated when the underlying media is not XIP. This is the case for NAND flash, used in many modern devices. Here code in NAND is copied (or shadowed) to RAM, where it can be executed in place. One way of achieving this is to copy the entire ROM contents into RAM during system boot and use the Memory Management Unit (MMU) to mark this area of RAM with read-only permissions. The data stored by this method is called the Core ROM image (or just Core image) to distinguish it from other data stored in NAND. The Core image is an XIP ROM and is usually the only one; it is permanently resident in RAM.
Figure 2, layout A shows how the NAND flash 20 is structured in a simple example. All the ROM contents 22 are permanently resident in RAM and any executables in the user data area 24 (for example the C: or D: drive) are copied into RAM as they are needed.
The above method can be costly in terms of RAM usage, and a more efficient scheme can be used to split the ROM contents into those parts required to boot the OS, and everything else. The former is placed in the Core image as before and the latter is placed into another area called the Read-Only File System (ROFS). Code in ROFS is copied into RAM as it is needed at runtime, at the granularity of an executable (or other whole file), in the same way as executables in the user data area. In a specific example of an embodiment using Symbian OS, the component responsible for doing this is the 'Loader', which is part of the File Server process.
In an example embodiment, there are several ROFS images, for example localisation and/or operator-specific images. Usually, the first one (called the primary ROFS) is combined with the Core image into a single ROM-like interface by what is known as the Composite File System.
Layout B in Figure 2 shows a Composite File System structure of another example. Here, ROM 30 is divided into the Core Image 32 comprising those components of the OS which will always be loaded into RAM, and the ROFS 34 containing those components which do not need to be continuously present in RAM, but which can be loaded in and out of RAM as required. As mentioned, components in the ROFS 34 are loaded in and out of RAM as whole components when they are required (in the case of loading in) or not required. Comparing this to layout A, it can be seen that layout B is more RAM-efficient because some of the contents of the ROFS 34 are not copied into RAM at any given time. The more unused files there are in the ROFS 34, the greater the RAM saving.
It would, however, be beneficial if even further RAM savings could be made. Virtual memory techniques are known in the art, where the combined size of any programs, data and stack exceeds the physical memory available, but programs and data are split up into units called pages. The pages which are required to be executed can be loaded into RAM, with the rest of the pages of the program and data stored in non XIP memory (such as on disk). Demand paging refers to a form of paging where pages are loaded into memory on demand as they are needed, rather than in advance. Demand paging therefore generally relies on page faults occurring to trigger the loading of a page into RAM for execution.
An example embodiment of the invention to be described is based upon the smartphone architecture shown in Figure 1, and in particular a smartphone running Symbian OS. Within Symbian OS, as mentioned, the part of the operating system which is responsible overall for loading programs and data from non XIP memory into RAM is the "loader". Many further details of the operation of the loader can be found in Sales, J., Symbian OS Internals, John Wiley & Sons, 2005, and in particular chapter 10 thereof, the entire contents of which are incorporated herein by reference. Within the example embodiment to be described the operation of the loader is modified to allow demand paging techniques to be used within the framework of Symbian OS.
In particular, according to the example embodiment, a smartphone is provided having a composite file system as previously described, wherein the CFS provides a Core Image comprising those components of the OS which will always be loaded into RAM, and the ROFS containing those components which do not need to be continuously present in RAM, but which can be loaded in and out of RAM as required. In order to reduce the RAM requirement of the smartphone, within the example embodiment the principles of virtual memory are used on the core image, to allow data and programs to be paged in and out of memory when required or not required. By using virtual memory techniques such as this, then RAM savings can be made, and overall hardware cost of a smartphone reduced.
Since an XIP ROM image on NAND is actually stored in RAM, an opportunity arises to demand page the contents of the XIP ROM, that is, read its data contents from NAND flash into RAM (where it can be executed), on demand. This is called XIP ROM Paging (or demand paging). "Paging" can refer to reading in required segments ("pages") of executable code into RAM as they are required, at a finer granularity than that of the entire executable. Typically, page size may be around 4kB; that is, code can be read in and out of RAM as required in 4kB chunks. A single executable may comprise a large number of pages. Paging is therefore very different from the operation of the ROFS, for example, wherein whole executables are read in and out of RAM as they are required to be run.
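The page-granularity arithmetic just described can be illustrated with a short sketch. The 150 kB executable size is an invented figure, not taken from the description, and the function name is illustrative only.

```python
PAGE_SIZE = 4 * 1024   # the 4 kB page size used in this example

def pages_for(size_bytes):
    """Number of whole 4 kB pages spanned by an executable of the given size."""
    return -(-size_bytes // PAGE_SIZE)   # ceiling division

# A hypothetical 150 kB executable comprises 38 pages; under demand
# paging only the pages actually executed need ever occupy RAM,
# whereas the ROFS would load the whole 150 kB.
print(pages_for(150 * 1024))
```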
In the example embodiment of the invention an XIP ROM image is split into two parts, one containing unpaged data and one containing data paged on demand. In this example the unpaged data is those executables and other data which cannot be split up into pages. The unpaged data consists of kernel-side code plus those parts that should not be paged for other reasons (e.g. performance, robustness, power management, etc). The terms 'locked down' or 'wired' can also be used to mean unpaged. Paged data in this example is those executables and other data which can be split up into pages.
At boot time, the unpaged area at the start of the XIP ROM image is loaded into RAM as normal but the linear address region normally occupied by the paged area is left unmapped - i.e. no RAM is allocated for it in this example.
When a thread accesses memory in the paged area, it takes a page fault. The page fault handler code in the kernel then allocates a page of RAM and reads the contents for this from the XIP ROM image contained on storage media (e.g. NAND flash). As mentioned, a page is a convenient unit of memory allocation: in this example it is 4kB. The thread then continues execution from the point where it took the page fault. This process is referred to in this example embodiment as 'paging in' and is described in more detail later. When the free RAM on the system reaches zero, memory allocation requests can be satisfied by taking RAM from the paged-in XIP ROM region. As RAM pages in the XIP ROM region are unloaded, they are 'paged out'. Figure 3 shows the operations just described.
Note that the content in the paged data area of an XIP ROM is subject to paging in this example, not just executable code; accessing any file in this area may induce a page fault. A page may contain data from one or more files and page boundaries do not necessarily coincide with file boundaries in the example embodiment.
Figure 2, layout C shows an XIP ROM paging structure according to the example embodiment. Here, ROM 40 comprises an unpaged core area 42 containing those components which should not be paged, and a paged core area 44 containing those components which should reside in the core image rather than the ROFS, but which can be paged. ROFS 46 then contains those components which do not need to be in the Core image. Although the unpaged area of the Core image may be larger than the total Core image in layout B, only a fraction of the contents of the paged area needs to be copied into RAM compared to the amount of loaded ROFS code in layout B.
Further details of the algorithm which controls demand paging in this example embodiment will now be described. All memory content that can be demand paged is said in this example to be 'paged memory' and the process is controlled by the 'paging subsystem'. Other terms that are used in describing example embodiments of the invention are:
1. Live Page - A page of paged memory whose contents are currently available.
2. Dead Page - A page of paged memory whose contents are not currently available.
3. Page In - The act of making a dead page into a live page.
4. Page Out - The act of making a live page into a dead page. The RAM used to store its contents may then be reused for other purposes.
In one embodiment, efficient performance of the paging subsystem is dependent on the algorithm that selects which pages are live at any given time, or conversely, which live pages should be made dead. The paging subsystem of this embodiment approximates a Least Recently Used (LRU) algorithm for determining which pages to page out. The memory management unit 28 (MMU) provided in the example device is a component comprising hardware and software which has overall responsibility for the proper operation of the device memory, and in particular for allowing the application processor to write to or read from the memory. The MMU is part of the paging subsystem of this example embodiment.
The paging algorithm according to one embodiment provides a "live page list". All live pages are stored on the 'live page list', which is a part of the paging cache. Figure 4 shows the live page list. The live page list is split into two sub-lists, one containing young pages (the "young page list" 72) and the other, old pages (the "old page list" 74). The memory management unit (MMU) 58 in the device of this example is used to make all young pages accessible to programs but the old pages inaccessible. However, the contents of old pages are preserved and they still count as being live. The net effect is of a FIFO (first-in, first-out) list in front of an LRU list, which results in less page churn than a plain LRU.
Figure 5 shows what happens when a page is "paged in" in this example embodiment. When a page is paged in, it is added to the start of the young list 72 in the live page list, making it the youngest.
The paging subsystem of some embodiments attempts to keep the relative sizes of the two lists equal to a value called the young/old ratio. If this ratio is R, the number of young pages is Ny and the number of old pages is No, then if Ny > R × No, a page is taken from the end of the young list 72 and placed at the start of the old list 74. This process is called ageing, and is shown in Figure 6.
If an old page is accessed by a program in an example embodiment, this causes a page fault because the MMU has marked old pages as inaccessible. The paging subsystem then turns that page into a young page (i.e. rejuvenates it), and at the same time turns the last young page into an old page. This is shown in Figure 7, wherein the old page to be accessed is taken from the old list 74 and added to the young list 72, and the last (oldest) young page is aged from the young list 72 to the old list 74.
When the operating system requires more RAM for another purpose then it may obtain the memory used by a live page. In one example the 'oldest' live page is selected for paging out, turning it into a dead page, as shown in Figure 8. If paging out leaves too many young pages, according to the young/old ratio, then the last young page (e.g. Page D in Figure 8) would be aged. In this way, the young/old ratio helps to maintain the stability of the paging algorithm, and ensure that there are always some pages in the old list.
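The young/old list mechanics described above (paging in, ageing, rejuvenation and paging out) can be sketched as follows. This is a minimal illustration under stated assumptions, not the Symbian OS implementation: pages are plain identifiers, the class and method names are invented, and the MMU marking of old pages as inaccessible is modelled only by which list a page sits on.

```python
from collections import deque

class LivePageList:
    """Sketch of the live page list: a FIFO young list in front of an
    LRU-like old list, approximating Least Recently Used eviction."""

    def __init__(self, ratio=1):
        self.ratio = ratio        # the young/old ratio R
        self.young = deque()      # accessible pages, youngest at the front
        self.old = deque()        # live but MMU-inaccessible pages

    def _age(self):
        # Ageing (Figure 6): while Ny > R * No, move the page at the
        # end of the young list to the start of the old list.
        while len(self.young) > self.ratio * len(self.old):
            self.old.appendleft(self.young.pop())

    def page_in(self, page):
        # A paged-in page goes to the start of the young list (Figure 5).
        self.young.appendleft(page)
        self._age()

    def rejuvenate(self, page):
        # Access to an old page faults; the page becomes young again
        # and the oldest young page is aged in its place (Figure 7).
        self.old.remove(page)
        self.young.appendleft(page)
        self._age()

    def page_out(self):
        # Evict the 'oldest' live page, turning it into a dead page
        # (Figure 8); ageing then restores the young/old ratio.
        victim = self.old.pop() if self.old else self.young.pop()
        self._age()
        return victim
```

For example, after paging in pages A, B, C and D with R = 1, rejuvenating A moves it back to the young list while the oldest young page is aged, and a subsequent page-out evicts the oldest old page.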
When a program attempts to access paged memory that is 'dead', a page fault is generated by the MMU and the executing thread is diverted to the Symbian OS exception handler. This performs the following tasks:
1. Obtain a page of RAM from the system's pool of unused RAM (i.e. the 'free pool'), or if this is empty, page out the oldest live page and use that instead.
2. Read the contents for this page from some media (e.g. NAND flash).
3. Update the paging cache's live list as described previously.
4. Use the MMU to make this RAM page accessible at the correct linear address.
5. Resume execution of the program's instructions, starting with the one that caused the initial page fault.
In some embodiments the above actions are executed in the context of the thread that tries to access the paged memory.
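The fault-servicing steps above can be sketched as follows. This is an illustrative fragment, not Symbian OS code: the `free_pool`, `live_pages` and `read_from_media` names are invented stand-ins, and the MMU mapping and thread-resume steps (4 and 5) are outside the sketch.

```python
def service_page_fault(page, free_pool, live_pages, read_from_media):
    """Sketch of servicing a page fault on a dead page.

    live_pages is ordered oldest-first; free_pool holds spare RAM frames.
    """
    # Step 1: obtain a RAM frame from the free pool, or, if the pool
    # is empty, page out the oldest live page and reuse its frame.
    if free_pool:
        frame = free_pool.pop()
    else:
        oldest = live_pages.pop(0)       # oldest live page becomes dead
        frame = oldest["frame"]
    # Step 2: read the page's contents from backing media (e.g. NAND flash).
    frame["data"] = read_from_media(page)
    # Step 3: record the page as the youngest live page.
    live_pages.append({"page": page, "frame": frame})
    return frame
```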
When the system requires more RAM and the free pool is empty then RAM that is being used to store paged memory is freed up for use. This is referred to as 'paging out' and happens by the following process:
1. Remove the 'oldest' RAM page from the paging cache.
2. Use the MMU to mark the page as inaccessible.
3. Return the RAM page to the free pool.
Possible benefits of the demand paging algorithm of some embodiments of the invention will now be discussed. In general, a purpose of demand paging is to save RAM, but there may also be at least two other potential benefits. These benefits can be dependent on a paging configuration, discussed later.
One possible performance benefit resulting from some embodiments of the invention is due to so-called "lazy loading". In general, the cost of servicing a page fault means that paging has a negative impact on performance. However, in some cases demand paging (DP) actually improves performance compared with the non-DP composite file system case (Figure 2, layout B), especially when the use-case normally involves loading a large amount of code into RAM (e.g. when booting or starting up large applications). In these cases, the performance overhead of paging can be outweighed by the performance gain of loading less code into RAM. This is sometimes known as 'lazy loading' of code.
Note that when the non-DP case consists of a large core image (i.e. something closer to Figure 2, layout A), most or all of the code involved in a use-case may already be permanently loaded, and so the performance improvement of lazy loading may be reduced. An exception to this is during boot, where the cost of loading the whole core image into RAM contributes to the overall boot time.
A second possible performance improvement lies in improved stability of the device. The stability of a device is often at its weakest in Out Of Memory (OOM) situations. Poorly written code may not cope well with exceptions caused by failed memory allocations. As a minimum, an OOM situation will degrade the user experience.
If DP is enabled on a device and the same physical RAM is available compared with the non-DP case, the increased RAM saving makes it more difficult for the device to go OOM, avoiding many potential stability issues. Furthermore, the RAM saving achieved by DP is proportional to the amount of code loaded in the non-DP case at a particular time. For instance, the RAM saving when 5 applications are running is greater than the saving immediately after boot. This can make it even harder to induce an OOM situation. Note that this increased stability may only apply when the entire device is OOM. Individual threads may have OOM problems due to reaching their own heap limits. DP may not help in these cases.
In addition to the above described benefits of demand paging, further performance improvements may be obtained in dependence on the demand paging configuration. In particular, demand paging can introduce three new configurable parameters to the system. These are:
1. The amount of code and data that is marked as unpaged.
2. The minimum size of the paging cache.
3. The ratio of young pages to old pages in the paging cache.
The first two are discussed below. The third should be determined empirically.
With respect to the amount of unpaged files, it is preferred in some embodiments that areas of the OS involved in servicing a paging fault are protected from blocking on the thread that took the paging fault (directly or indirectly). Otherwise, a deadlock situation may occur. This is partly achieved in Symbian OS by ensuring that all kernel-side components are always unpaged.
In addition to kernel-side components, a number of components are explicitly made unpaged in example embodiments of the invention, to meet the functional and performance requirements of a device. The performance overhead of servicing a page fault is unbounded and variable so it may be desirable to protect some critical code paths by making files unpaged. Chains of files and their dependencies may need to be unpaged to achieve this. It may be possible to reduce the set of unpaged components by breaking unnecessary dependencies and separating critical code paths from non-critical ones.
Whilst making a component unpaged is a straightforward performance/RAM trade-off, this can be made configurable, allowing the device manufacturer in embodiments of the invention to make the decision based on their system requirements.
With respect to the paging cache size, as described previously, if the system requires more free RAM and the free RAM pool is empty, then pages are removed from the paging cache in order to service the memory allocation. In some embodiments this cannot continue indefinitely or a situation may arise where the same pages are continually paged in and out of the paging cache; this is known as page thrashing. Performance is dramatically reduced in this situation.
To avoid catastrophic performance loss due to thrashing, within some embodiments a minimum paging cache size can be defined. If a system memory allocation would cause the paging cache to drop below the minimum size, then the allocation fails.
As paged data is paged in, the paging cache grows but any RAM used by the cache above the minimum size does not contribute to the amount of used RAM reported by the system. Although this RAM is really being used, it will be recycled whenever anything else in the system requires the RAM. So the effective RAM usage of the paging cache is determined by its minimum size.
In theory, it is also possible to limit the maximum paging cache size. However, this may not be useful in production devices because it prevents the paging cache from using all the otherwise unused RAM in the system. This may negatively impact performance for no effective RAM saving.
By setting a minimum paging cache size, thrashing can be prevented in some embodiments of the invention. In this respect, the minimum paging cache size relates to a minimum number of pages which should be in the paging cache at any one moment.
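The minimum-cache guard described above can be sketched as follows. This is an illustrative simplification with invented names: pages are list entries, and only the pass/fail decision and the reclaiming of cache pages above the minimum are modelled.

```python
def try_allocate(request, free_pool, paging_cache, min_cache):
    """Sketch: a system allocation of `request` pages may reclaim
    paging-cache pages, but never shrinks the cache below its
    configured minimum size, preventing thrashing."""
    # Pages reclaimable from the cache: anything above the minimum.
    reclaimable = max(0, len(paging_cache) - min_cache)
    if request > len(free_pool) + reclaimable:
        return False                 # allocation fails (cache protected)
    for _ in range(request):
        if free_pool:
            free_pool.pop()
        else:
            paging_cache.pop(0)      # page out the oldest cached page
    return True
```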
In one embodiment the pages in the paging cache are divided between the young list and the old list. This is not essential, however, and in other embodiments the paging cache may not be divided, or may be further subdivided into more than two lists. To help prevent thrashing, it is useful to maintain an overall minimum size of the list, and to make the pages therein accessible without having to be re-loaded into memory.
Overall the main advantage of using DP is the RAM saving which is obtained. An easy way to visualise the RAM saving achieved by DP is to compare simple configurations. Consider a non-DP ROM consisting of a Core with no ROFS (as in Figure 2, layout A). Compare that with a DP ROM consisting of an XIP ROM paged Core image, again with no ROFS (similar to Figure 2, layout C but without the ROFS). The total ROM contents are the same in both cases. Here the effective RAM saving is depicted by Figure 9.
The effective RAM saving is the size of all paged components minus the minimum size of the paging cache. Note that when a ROFS section is introduced, this calculation is much more complicated because the contents of the ROFS are likely to be different between the non-DP and DP cases.
The RAM saving can be increased by reducing the set of unpaged components and/or reducing the minimum paging cache size (i.e. making the configuration more 'stressed'). Performance can be improved (up to a point) by increasing the set of unpaged components and/or increasing the minimum paging cache size (i.e. making the configuration more 'relaxed'). However, if the configuration is made too relaxed then it is possible to end up with a net RAM increase compared with a non-DP ROM.
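The saving calculation above can be made concrete with a small worked example. The sizes below are invented for illustration and are not figures from the description.

```python
# Illustrative figures only, in kB.
paged_components = 20 * 1024     # total size of all paged components
min_paging_cache = 2 * 1024      # minimum paging cache size

# Effective RAM saving (Figure 9): paged content minus minimum cache.
effective_saving = paged_components - min_paging_cache   # 18 MB saved

# A configuration relaxed too far can produce a net RAM increase:
relaxed_cache = 24 * 1024
net_saving_relaxed = paged_components - relaxed_cache    # negative
```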
Demand paging is therefore able to present significant advantages in terms of RAM savings, and hence to provide an attendant reduction in the manufacturing cost of a device. Additionally, as mentioned above, depending on configuration, performance improvements can also be obtained.
However, when actually implementing demand paging on a device, a complication can arise in terms of selecting which OS components should actually be included in that part of the ROM (the Core image) which is subject to demand paging, rather than being included in the ROFS. If this selection is not performed correctly, then no RAM savings may in fact be achieved. In such a case, dependent on the DP configuration, it may be that in fact performance overheads are being incurred in the form of page faults with no concurrent benefit in the form of a reduced RAM requirement.
More particularly, one might have thought that given that demand paging can only operate on paged components, then all paged components should be placed in the Core image (where they can then be demand paged), and all unpaged components placed in the ROFS (or other file system).
However, it is often not possible to simply place all paged components in the core ROM image and all unpaged components in the primary ROFS because there is a restriction that all static dependencies (such as, for example, DLL functions, other executables, etc.) of executable components in a core ROM image should also be present in that image, whether they are paged or unpaged. If a paged executable has a number of unpaged dependencies, then the RAM savings made by placing the paged executable in the core image may be offset by the RAM loss of having its unpaged dependencies in the core ROM image as well.
This problem is referred to as the "core/ROFS split", and previously has been solved manually on a device by device basis. However, such an approach is time consuming, and does not in fact guarantee that an appropriate split is obtained that results in a RAM saving. A different approach to determining the "core/ROFS split", i.e. which components should be included in the Core image and which in the ROFS, is desirable, which can help to ensure that the RAM saving benefits of demand paging can be obtained.
In example embodiments of the invention software components such as operating system components or other components can be marked as paged or unpaged by changing a flag in the header of the component (typically for executable components only) or adding a keyword to the instruction file that places files in ROM. The default behaviour of unmarked executable components can also be specified. Unmarked non-executable components will always be paged.
There may be a complication if a paged component has a number of unpaged dependencies, as the RAM savings made by placing the paged executable in the core image may be offset by the RAM loss of having its unpaged dependencies in the core ROM image as well. Embodiments of the present invention present new strategies for handling this complication.
In one embodiment, all paged executables and their dependencies (whether they are paged or unpaged) are placed in the core ROM image. Only unpaged dependencies that have no paged executables dependent on them are placed in the primary ROFS image. This example strategy does not attempt to limit the number of unpaged executables in the core ROM image.
In another embodiment some unpaged components may in practice be used (and hence loaded into RAM) all or most of the time, irrespective of whether they are placed in the core ROM image or the primary ROFS image. These unpaged components are collected into a 'privileged set' of components that are placed in the core ROM image. All other unpaged components are placed in the primary ROFS image in this embodiment. The 'privileged set' may also contain unpaged executables that have a large number of paged executables dependent on them, where the cost of placing an unpaged executable in the core ROM image is outweighed by the benefit of having its paged dependencies in the core, resulting in a net RAM saving. Only those paged executables that have dependencies that are all paged or in the 'privileged set' are placed in the core ROM image. Other paged executables are placed in the primary ROFS image.
Determining the 'privileged set' is done in this example embodiment by creating an initial ROM with as many components in the primary ROFS as possible. The use-case that is to be optimised for is then executed and the system is interrogated for which components are loaded during the use-case. The 'privileged set' is the intersection of the list of loaded components and the list of unpaged components.
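The privileged-set determination just described reduces to a set intersection, which can be sketched as follows. The function name and the component names in the usage note are invented for illustration.

```python
def privileged_set(loaded_in_use_case, unpaged_components):
    """Sketch: the 'privileged set' is the intersection of the
    components observed loaded during the optimised use-case and
    the components marked unpaged."""
    return set(loaded_in_use_case) & set(unpaged_components)
```

For example, if `ui.dll` and `codec.dll` are both loaded during the use-case and marked unpaged, they fall into the privileged set and are placed in the core ROM image.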
In a further embodiment an intermediate approach is adopted. Here paged components which have all paged dependencies are placed in the core image, and unpaged components are placed in the ROFS, unless the component is to be in the core image, for example if it is a kernel component. Paged components are also placed in the ROFS in this embodiment if any of their dependencies are unpaged, again unless the component is to be in the core image for some other reason, such as it being a kernel component.
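The placement rule of this intermediate approach can be sketched as a per-component decision. This is a hedged simplification: the function and parameter names are invented, and "is a kernel component" stands in for the more general "is required in the core image for some other reason".

```python
def place(component, is_paged, is_kernel, deps, paged_flags):
    """Sketch of the intermediate core/ROFS split rule:
    - kernel (or otherwise core-mandated) components go to the core;
    - paged components go to the core only if every static dependency
      is also paged;
    - all other components go to the ROFS."""
    if is_kernel:
        return "core"
    if is_paged and all(paged_flags[d] for d in deps):
        return "core"
    return "rofs"
```

So a paged executable with one unpaged dependency lands in the ROFS, since placing it in the core would drag the unpaged dependency in as well.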
Several specific embodiments of the present invention will now be described below by way of example, but prior to this, some aspects which are common to several of the embodiments will first be described.
Some embodiments of the present invention are directed towards providing a method and apparatus for deciding whether a particular software component to be loaded onto a device such as a smartphone or the like needs to be in the part of memory which is capable of being demand paged, or whether it should be in a different part of the memory which is not capable of being demand paged, such as, for example, in the case of the Symbian operating system, the Read-Only File System (ROFS). It should be noted that these embodiments of the present invention are particularly suitable for determining the memory contents in the Symbian operating system where the split is between the core image and the ROFS, as described previously, but in other embodiments of the present invention covered by the appended claims, the software components may be components of any software system, and the invention is not limited to use with operating system software solely, or the Symbian operating system software in particular.
With reference to Figure 10, Figure 10 illustrates a flow diagram of one example embodiment of the present invention. However, blocks 10.2 and 10.4 of Figure 10 are common to all of the described embodiments, and will be described first.
At block 10.2 a determination as to which software components need to be present in the Read-Only Memory which is being built is performed. This is a high level operation, and in this example involves compiling a list of all of the software components which are required to be installed onto the device. In the particular embodiment being described, we are concerned with which components of an operating system are to be installed on the device. It will be understood that block 10.2 may be performed by a device design team. Some devices may require only a subset of components of a particular operating system, whereas other devices may require more components, or a different subset of components. This will depend upon the device's purpose and its required functionality.
The second block in the method, 10.4, involves a determination for each software component to be installed as to whether the component is "paged" or "unpaged". A "paged" component is a component which is capable of being paged i.e. the code of the component can be read into RAM from where it can be executed in small blocks known as memory pages. If a component is not capable of being split into pages for execution, then it is deemed to be "unpaged". Here, in order to be executed, the component must typically be loaded whole into RAM, from where it can then be executed. The determination as to whether a particular component is capable of being paged, i.e. is "paged", or should not be paged i.e. must be loaded whole into RAM and therefore is "unpaged", depends on the details of the particular component, such as its purpose and function. Typically, lower level components, such as kernel components may not be capable of being paged. Often, some analysis and testing is required to determine whether a particular software component is capable of being paged, and this analysis can be either static analysis wherein properties of the component are analysed, or dynamic analysis making use of test cases of the device, and experimenting with the component in either paged or unpaged form.
Once it has been determined which software components are to be installed onto the device, and whether the components are paged or unpaged, these embodiments of the invention can then be used to determine which components should be placed in that part of the memory which is capable of being paged into RAM, and which components should be placed into that other part of the memory from which components are read whole into RAM. In an example using Symbian OS, the part of the memory from which components can be paged is referred to as the core image, whereas that part of the memory from which components are read whole is referred to as the Read-Only File System (ROFS). The core image and the ROFS together make up the composite file system (CFS).
With reference to the example embodiment shown in Figure 10, at block 10.6 a determination of the core/ROFS split for each component is performed, dependent on the paged status of the component and of any components dependent on it. Further details of this process are shown in the flow diagram of Figure 11, discussed later.
In the example of Figure 10, once the core/ROFS split determination has been performed for each component, at block 10.8 the core ROM image is built to include the components determined to be in the core, and correspondingly the ROFS is also built to contain the components determined to be in the ROFS. At block 10.10, which in this example would be performed for each different type of device, the core ROM image which is built at block 10.8 is stored in the NAND Flash, during the smartphone device manufacturing process. Thus, it will be appreciated that blocks 10.2 to 10.8 should be performed during the device design process, whereas block 10.10 is performed during the device manufacturing process in this example.
Turning now to Figure 11, the process involved in the core/ROFS split determination of block 10.6 of the first embodiment will now be described in more detail. Prior to this description, it is worth recalling that software components for which the core/ROFS split determination is to be performed can be marked as either paged or unpaged by changing a flag in the header of the component (for executable components only), or by adding a keyword to the instruction file (the OBEY file, or .OBY file) that places files in ROM. The default behaviour of unmarked executable components, i.e. those components which have neither a paged nor an unpaged marking, can also be specified. Unmarked non-executable components will be paged.
With the above in mind, at block 11.2 the first block in the process of 10.6 is performed. Here, an evaluation is made as to whether the present component being evaluated is an executable component. If the component is not an executable, then processing proceeds to block 11.4, wherein the paged status of the component is evaluated. This evaluation is performed by examining the instruction file (the OBY file, in this example), which specifies the paged status of the component. In the example, if the paged status is that the component is paged, then at block 11.6 the component is placed into the core ROM image. If the paged status is that the component is unpaged, then at block 11.18 the component is placed in the primary ROFS image.
Returning to block 11.2 of the example, if it was determined here that the present component is in fact an executable, then at block 11.8 an evaluation is performed as to what is the default paging behaviour for executables. In this example if this specifies that all executables are always to be paged, then processing proceeds to block 11.6, wherein the component is placed in the core ROM image. Conversely, if the default executable paging behaviour is that executables are never to be paged, then processing proceeds to block 11.18, wherein the component is placed in the primary ROFS image.
On the other hand, if the default executable paging behaviour is neither always paged nor never paged, then the behaviour is instead dependent on the particular markings for that component, i.e. either paged or unpaged, and processing proceeds to the evaluation of block 11.10, wherein these markings are examined.
In this example, at block 11.10 the paged status of the component in the OBY file is examined. If there is no marking for this component, then processing proceeds to block 11.12. However, if the marking is such that the component is marked as paged, then processing proceeds to block 11.6, and the component is placed in the core ROM image. If the component marking is "unpaged", then processing proceeds to a second evaluation process of block 11.16, wherein it is determined whether the component has any paged components dependent upon it. If the answer to this is positive i.e. the component does have paged components dependent upon it, then even though the component itself is unpaged, it is placed, at block 11.6, in the core ROM image. The reason for this in the current example is to maintain the system criterion that a paged component which is present in the core ROM image should also have its dependencies present in the core ROM image, even if those dependencies are not themselves paged. If, at block 11.16 the component is determined not to have any paged components dependent upon it, then there is no need for the unpaged component to be placed in the core ROM image, and at block 11.18 it can instead be placed in the primary ROFS image.
Returning to block 11.10 of the example, as mentioned, if the paged status in the OBY file is unmarked, then processing proceeds to block 11.12, wherein the executable header is examined to determine if it contains a marking as to the paged status. If here there is a marking that the component is paged, then processing proceeds to block 11.6, wherein the component is placed in the core ROM image. If the marking is such that the component is unpaged, then processing proceeds to block 11.16, wherein an evaluation is performed as to whether the component has paged components dependent upon it. If yes, then the component is placed in the core ROM image at block 11.6, for the same reasons as previously. If no, then the component is placed in the primary ROFS image, at block 11.18.
If the evaluation of block 11.12 of the example indicates that the executable header has no paged markings, then processing proceeds to block 11.14. Here the default behaviour for unmarked executables is followed. If the default behaviour for unmarked executables is to page the executable, then the component is placed in the core ROM image at block 11.6; otherwise, if the default behaviour is to have unmarked executables unpaged, then the evaluation of block 11.16 is again performed. Here, if the component has paged components dependent upon it, then the component is placed in the core ROM image, whereas if the component has no paged components dependent upon it, then the component is placed in the primary ROFS image.
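The decision tree of blocks 11.2 to 11.18 can be summarised in a short function. The following is a non-authoritative sketch: the `Component` fields, the marking values and the return labels are illustrative assumptions, not the actual Symbian OS build-tool data structures.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    # Illustrative record of the per-component metadata used by Figure 11.
    name: str
    is_executable: bool = True
    oby_marking: str = None       # "paged", "unpaged" or None (block 11.10)
    header_marking: str = None    # flag in the executable header (block 11.12)
    default_unmarked: str = "unpaged"  # default for unmarked executables (11.14)
    is_paged: bool = False        # resolved status, consulted for dependents
    dependents: list = field(default_factory=list)  # components relying on this one

def core_rofs_split(component, default_exe_paging=None):
    """Return "core" or "rofs" following the Figure 11 decision tree."""
    if not component.is_executable:
        # Blocks 11.2/11.4: non-executables follow the OBY marking;
        # unmarked non-executables default to paged (core image).
        return "rofs" if component.oby_marking == "unpaged" else "core"
    # Block 11.8: a global default can force every executable one way.
    if default_exe_paging == "always":
        return "core"
    if default_exe_paging == "never":
        return "rofs"
    # Blocks 11.10, 11.12, 11.14: OBY marking, then header flag,
    # then the default behaviour for unmarked executables.
    marking = (component.oby_marking
               or component.header_marking
               or component.default_unmarked)
    if marking == "paged":
        return "core"  # block 11.6
    # Block 11.16: an unpaged component still belongs in the core ROM
    # image if any paged component depends on it.
    if any(dep.is_paged for dep in component.dependents):
        return "core"
    return "rofs"  # block 11.18
```

Note that, following the criterion discussed above, an unpaged component which has a paged component dependent upon it still lands in the core ROM image.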
In this specific example, the above processing is performed in turn for every component, to determine whether the component should be placed in the core ROM image, or the primary ROFS image. Once every component has been processed, at the end of the procedure, both the contents of the core ROM image and the primary ROFS image have been obtained i.e. the ROM contents have been built. During a subsequent device manufacturing process, therefore, the ROM contents output from this method can be loaded into the NAND Flash in the device.
With this example embodiment, therefore, paged components are placed in the core ROM image, together with their dependencies, whether the dependent components are paged or not. If a component is unpaged, and has no paged dependencies, then it is placed in the primary ROFS image. By placing paged components in the core ROM image, the benefits of demand paging can be obtained for those components for which demand paging is suitable. In many cases a RAM saving will be obtained by doing this; however, whether a RAM saving is obtained in a particular case will depend upon the size of the unpaged dependent components which also have to be included in the core ROM image. Generally a RAM saving will be obtained using this example embodiment, unless any one of the unpaged dependent components is particularly large.
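To illustrate the RAM trade-off described above, the following rough sketch compares the resident RAM footprint with and without demand paging; the page size, component sizes and working-set figure are all illustrative assumptions, not measurements from any actual device.

```python
PAGE_SIZE = 4096  # bytes per memory page; a typical value, assumed here

def resident_ram(components):
    """Rough footprint model: a demand-paged component keeps only its
    working set of pages resident, while an unpaged one is loaded whole.
    components: iterable of (size_bytes, is_paged, working_set_pages)."""
    total = 0
    for size, is_paged, working_set in components:
        total += working_set * PAGE_SIZE if is_paged else size
    return total

# A 400 KiB paged component touching only 20 pages at a time, plus a
# 300 KiB unpaged dependency pulled into the core image with it:
with_paging = resident_ram([(400 * 1024, True, 20), (300 * 1024, False, 0)])
without_paging = resident_ram([(400 * 1024, False, 0), (300 * 1024, False, 0)])
# Paging still saves RAM here; a much larger unpaged dependency
# would erode or eliminate the saving.
```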
In a further example embodiment to be described next, further processing is performed to determine whether or not unpaged components are in fact in a privileged set of components that should be in the core ROM image anyway. The second example embodiment is based upon the realisation that some unpaged components may in practice be used (and hence loaded into RAM) all or most of the time, irrespective of whether they are placed in a core ROM image, or the primary ROFS image. If this is the case i.e. that such components are in any event loaded into RAM all or most of the time, then those components may as well be placed into the core ROM image. However, other unpaged components are placed into the ROFS image, and even if there are paged components which are dependent on such unpaged components, then the paged components are placed into the ROFS image with the unpaged components on which they are dependent. The processing performed in the second example embodiment is shown in Figures 12 to 14.
More particularly, with reference to Figure 12, at block 12.2 the components which are to be loaded onto the device are determined, and at block 12.4 a determination is made as to whether each component which is to be loaded is "paged" or "unpaged". The factors considered in these determinations are the same as for blocks 10.2 and 10.4, discussed previously, and hence are not discussed further here.
Once the components to be loaded have been determined, and whether each component is paged or unpaged, within the second embodiment, at block 12.6 a "privileged set" of unpaged components is found. Figure 13 shows the procedure to be performed to build the privileged set of unpaged components.
With reference to Figure 13 illustrating this example embodiment, at block 13.2 a ROM is created with most components (other than components for the kernel) placed in the ROFS, in a manner similar to that shown previously in layout A of Figure 2. This ROM is then installed on a test device. At block 13.4 the device is booted, and a particular use scenario is run on the device. For example, in the case of a smartphone, the use scenario may be performing a call, sending an email, or the like.
While the use scenario is being performed in accordance with this example embodiment, at block 13.6 the loading of the software components into RAM is monitored, and a list is compiled of which components are loaded into RAM during the use test. Once the use test is over, at block 13.8 the list can be examined, and a second list compiled of which components of those which were loaded into RAM were in fact unpaged components. At block 13.10 the unpaged components which were loaded into RAM during the use test are recorded as members of the privileged set. Thus, the privileged set comprises a list of software component names, which are all unpaged components, but which were loaded into RAM during the use test scenario. It should be understood that several use tests testing different scenarios can be performed in accordance with embodiments of the invention, such that the privileged set can contain the names of those unpaged components which are loaded into RAM during several different uses of the device. Of course, whether there are in fact several uses of the device will depend upon the device itself. For example, an MP3 player which only plays stored MP3 files may not have any further uses. However, an MP3 player which also has an in-built radio may have the additional radio use.
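The compilation of the privileged set in blocks 13.6 to 13.10 amounts to a simple set intersection, which might be sketched as follows; the argument shapes are assumptions made for illustration.

```python
def build_privileged_set(ram_load_logs, unpaged_components):
    """Blocks 13.6 to 13.10 as a sketch: the privileged set is every
    unpaged component observed being loaded into RAM during any of
    the monitored use scenarios.
    ram_load_logs: iterable of per-scenario lists of component names
        (one list per use test, e.g. performing a call, sending an email).
    unpaged_components: set of names of components marked unpaged."""
    loaded = set()
    for scenario_log in ram_load_logs:  # blocks 13.4/13.6: monitor each scenario
        loaded.update(scenario_log)
    # Blocks 13.8/13.10: keep only those loaded components that are unpaged.
    return loaded & set(unpaged_components)
```

Running several different scenarios simply adds more logs to `ram_load_logs`, so the privileged set grows to cover all monitored uses of the device.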
Returning to Figure 12, after the privileged set of components has been compiled in this example embodiment, at block 12.8 the core/ROFS split determination is performed for each component. This is performed in dependence on the paged status of the component itself as well as its dependencies, and also on whether the dependent components are members of the privileged set. The core/ROFS split determination is repeated for each component which is to be installed on the device. At the end of block 12.8, therefore, there will have been obtained a core ROM image containing those components which are to be in the core image, and a primary ROFS image containing those components which are to be in the primary ROFS. At block 12.10, therefore, the core ROM image can be built, as well as the primary ROFS image. Finally, at block 12.12, during the manufacturing process for a device in accordance with this example, the core ROM image and primary ROFS image obtained at blocks 12.8 and 12.10 can be stored on the NAND Flash in the device.
Further details of the method performed during block 12.8 will now be described with respect to Figure 14.
More particularly, Figure 14 shows the method performed during the core/ROFS split determination of block 12.8. In this example the procedure shown in Figure 14 is repeated for each component for which the core/ROFS split determination needs to be made.
Referring to Figure 14, for the component for which the determination is presently being made, a first evaluation is performed at block 14.2 as to whether the component is an executable component. If the component is not an executable component, then at block 14.4 the paged status of the component in the OBY file is examined in this example. If the status is that the component is "paged" then at block 14.6 the component is placed in the core ROM image, whereas otherwise if the status is "unpaged", then at block 14.14 the component is placed in the primary ROFS image.
If at block 14.2 of this example it is determined that the component is an executable, then processing proceeds to a second evaluation at block 14.8, wherein the default paging behaviour for executables is examined. If the default paging behaviour for executables is that all executables should be paged, then of course the component must be placed in the core ROM image, at block 14.6. However, if the default paging behaviour for executables is that executables should not be paged, then the component should be placed in the primary ROFS image, at block 14.14. If, however, there is no such default paging behaviour specified, then processing proceeds to block 14.10, wherein a further evaluation is performed on the particular paged or unpaged marking of the particular component. In particular, the paged status of the particular component is examined in the OBY file. If the paged status is that the component is paged, then processing proceeds to a further evaluation, at block 14.12. This is an evaluation as to whether all of the component's dependencies are paged, or whether its dependencies are in the privileged set. If this evaluation returns positive, then this means that not only is the component itself paged, but that its dependencies are paged, or are unpaged but are in the privileged set of unpaged components which will in any event be placed in the core ROM image. If this is the case, then the component is suitable for paging, together with its dependencies, and hence is placed in the core ROM image at block 14.6.
However, if the paged status in the OBY file is that the component is unpaged, then processing proceeds to the evaluation of block 14.20. Here an evaluation is performed as to whether the component is listed in the privileged set of unpaged components, which in any event should be placed in the core ROM image to be loaded into RAM. If this is the case i.e. the component is in the privileged set, then processing proceeds to block 14.6, wherein the component is placed in the core ROM image. If this is not the case, i.e. the component is unpaged, and is not in the privileged set, then the component is placed in the primary ROFS image, at block 14.14.
In this example embodiment, if there is no paged status in the OBY file for the component at block 14.10, i.e. the component is unmarked, then processing proceeds to block 14.16, wherein the header of the executable is examined to determine whether there is a paged or unpaged marking in the header. If the executable header indicates that the component is a paged component, then processing proceeds to block 14.12, wherein the evaluation is performed as to whether all of the component's dependencies are paged or whether the dependencies are in the privileged set. The reason for this is as described previously; a paged executable is only placed in the core ROM image if its dependencies will also be placed in the core ROM image, i.e. they are either paged themselves, or in the privileged set of unpaged components which are placed in the core ROM image in any event. If this is the case, i.e. the evaluation of block 14.12 returns a positive, then the component is placed in the core ROM image at block 14.6. However, if this is not the case, i.e. not all of the component's dependencies are paged and neither are they in the privileged set, then the component is placed in the primary ROFS image at block 14.14.
If at block 14.16 of the example it is determined that the executable header does not have a marking as to whether the executable is paged or unpaged, then processing proceeds to a final evaluation at block 14.18.
At block 14.18 the default executable paging behaviour for the executable is examined. If this is that the executable should be paged, then processing proceeds to block 14.12, wherein the paged status of the component's dependencies, or whether the dependencies are in the privileged set, is examined. In the current example, if all of the component's dependencies are either paged or in the privileged set, then the component itself can be placed in the core ROM image, at block 14.6. Conversely, if the component's dependencies are not all either paged or in the privileged set, then the component is placed in the primary ROFS image, at block 14.14.
If the default behaviour for the executable is that it is unpaged, as evaluated at block 14.18, then a second evaluation is performed at block 14.20 to determine whether the component is in the privileged set of unpaged components which in any event need to be placed in the core ROM image. If this is the case, then the component is placed in the core ROM image at block 14.6. If this is not the case, then the component is placed in the primary ROFS image, at block 14.14. With the above example method, therefore, a ROM is obtained which contains, in the core ROM image, those paged components whose dependencies are all either paged or in the privileged set, with other components in the primary ROFS image. In this way, the core ROM image contains those components which are in any event almost always loaded into RAM, together with those components which are capable of being paged. Hence, the benefits of demand paging in terms of RAM savings can be obtained.
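The Figure 14 decision tree differs from that of Figure 11 chiefly in the privileged-set checks of blocks 14.12 and 14.20. A non-authoritative sketch, using illustrative data structures rather than the actual build-tool representations, might read:

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    # Illustrative record of the per-component metadata used by Figure 14.
    name: str
    is_executable: bool = True
    oby_marking: str = None       # "paged", "unpaged" or None (block 14.10)
    header_marking: str = None    # flag in the executable header (block 14.16)
    default_unmarked: str = "unpaged"  # default for unmarked executables (14.18)
    is_paged: bool = False
    dependencies: list = field(default_factory=list)  # components this one relies on

def core_rofs_split_v2(component, privileged, default_exe_paging=None):
    """Return "core" or "rofs" following the Figure 14 decision tree.
    privileged: set of names of unpaged components that go in the core
    ROM image in any event (the privileged set of Figure 13)."""
    if not component.is_executable:                      # blocks 14.2/14.4
        return "rofs" if component.oby_marking == "unpaged" else "core"
    if default_exe_paging == "always":                   # block 14.8
        return "core"
    if default_exe_paging == "never":
        return "rofs"
    marking = (component.oby_marking                     # blocks 14.10/14.16/14.18
               or component.header_marking
               or component.default_unmarked)
    if marking == "paged":
        # Block 14.12: a paged executable enters the core image only if
        # every dependency is itself paged or is in the privileged set.
        ok = all(dep.is_paged or dep.name in privileged
                 for dep in component.dependencies)
        return "core" if ok else "rofs"
    # Block 14.20: an unpaged executable enters the core image only if
    # it is itself a member of the privileged set.
    return "core" if component.name in privileged else "rofs"
```

A paged application whose unpaged dependency is outside the privileged set is therefore demoted to the primary ROFS image along with that dependency, as described above.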
Various modifications may be made to the above described embodiments to provide further embodiments. For example, in the second example embodiment, the privileged set was determined in dependence upon whether the unpaged components in the privileged set were in any event loaded into RAM during one or more test use cases. Thus, to determine the privileged set it was necessary to test the device using the use cases in advance.
In a further embodiment, however, the privileged set can be determined in a different way, and in particular based upon whether the components form part of the operating system kernel or not. If a component is a kernel component, then it is likely that it will almost always be loaded into RAM irrespective of the use case. Thus, a privileged set can be compiled dependent on whether the component is a kernel component. The same procedure as shown in Figure 14 can then be used, but with the different privileged set. This would result in paged components which have dependencies all of which are paged being placed in the core ROM image, but paged components which have unpaged dependencies which are not in the privileged set would be placed in the ROFS image. Unpaged components would automatically be placed in the ROFS image, unless they were kernel components, and hence in the privileged set.
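Under this further embodiment, the privileged set reduces to a simple filter over the component list. A minimal sketch, assuming a component record with hypothetical `is_kernel` and `is_paged` fields:

```python
from collections import namedtuple

# Minimal illustrative record; real build metadata would differ.
Comp = namedtuple("Comp", ["name", "is_kernel", "is_paged"])

def kernel_privileged_set(components):
    """Further-embodiment sketch: instead of running use tests, take
    every unpaged kernel component as the privileged set, on the basis
    that kernel code is loaded into RAM regardless of the use case."""
    return {c.name for c in components if c.is_kernel and not c.is_paged}
```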
Using some embodiments of the invention, therefore, a ROM image may be built which can be stored in NAND flash memory, and which contains a core ROM image with those components which have been determined to be in the core ROM image so as to be suitable for demand paging, and a primary ROFS image containing those components which will not be demand paged. Figure 15 shows such a ROM in accordance with an example embodiment, which is then stored in a device in NAND Flash. Because the core ROM image contains a large amount of paged data, the XIP ROM image in RAM on the device is smaller, as indicated in Figure 3, and hence significant RAM savings can be made.
Embodiments of the present invention can provide an improved technique for determining which components of an operating system (or other software programs) should be included in an area of a memory which is capable of being paged into RAM, and which components should be included in an area of memory from which only whole components at a time are read into RAM. More particularly, embodiments of the invention can provide a technique which makes a decision as to whether a software component should be placed in the pageable area of the memory in dependence on whether the software component itself is capable of being divided into memory pages (i.e. whether the component is "paged"). In some embodiments, as well as looking at the software component itself, the dependencies of the component (i.e. the other software components on which the first component relies for its operation) are also examined to determine if they are capable of being divided into memory pages, and if they are so capable then the component and the dependencies are included in the pageable area of the memory. If the dependencies are not capable of being paged (i.e. are "unpaged"), then the component and the dependencies should not be included in the pageable area of the memory.
In further embodiments, a "privileged set" of components is compiled of components which should be included in the pageable area in any event, even if the components themselves are not paged. The decision as to whether a particular component should be placed in the pageable area of the memory is then made in dependence on whether the component and its dependencies are paged, and also in dependence on whether the dependencies are in the privileged set.
Using some embodiments, it can be determined which software components should be stored in which part of the memory, so as to help ensure that the primary benefits of demand paging, in terms of providing a RAM saving, are obtained. Saving RAM in the device will typically lead to a reduction in the component cost of the device. The software component may be stored in the first part of the memory if the component is capable of being divided into memory pages for loading into and out of RAM. This can help to ensure that paged components, which are capable of being subjected to demand paging, are stored in the part of the memory in which demand paging is performed, and hence the benefits of demand paging can be obtained.
Moreover, in some embodiments the software component is stored in the first part of the memory if the component is a dependency of another component which is capable of being divided into memory pages for loading into and out of RAM. This can help to ensure that unpaged dependencies of a paged component are also included in the part of the memory which is paged. If the above condition is not met, then the software component may be stored in the second part of the memory. This can avoid the first part of the memory becoming too large, hence allowing RAM savings to be made.
In another embodiment the software component is stored in the first part of the memory or the second part of the memory in further dependence on the determination as to whether other software components which are dependencies of the component are capable of being divided into memory pages for loading into and out of RAM. In this embodiment, the software component is stored in the first part of the memory if it is capable itself of being divided into memory pages for loading into and out of RAM and the other software components which are dependencies of the component are also all capable of being divided into memory pages for loading into and out of RAM. This can help to ensure that only paged components which can be subject to demand paging are placed in the first part of the memory, and hence RAM is not wasted in storing unpaged components which are there simply because they are a dependency of a paged component.
Additionally, in an example embodiment the software component is stored in the first part of the memory if it is capable itself of being divided into memory pages for loading into and out of RAM and the other software components which are dependencies of the component are members of a predetermined privileged set of components. This example implementation recognises the fact that there are some unpaged components which are in any event stored in RAM almost all of the time. If these components are dependencies of a paged component, then that paged component should be included in the part of the memory which can be paged.
In a further embodiment the software component is stored in the first part of the memory if it is a member of a predetermined privileged set of components. In this embodiment the predetermined privileged set comprises those software components which during use of a computing device comprising the set of components are in any event loaded into RAM. This example embodiment recognises that if the component is in any event loaded into RAM during use then the component may as well be placed in the first part of the memory.
In one embodiment the components in the set are those components which are loaded into RAM during one or more test use cases of the device. This allows actual usage of the device to be used to optimise which components should be stored where. In another embodiment the components in the set are those components which are loaded into RAM because they are kernel components of the computing device's operating system. These are components which need to be loaded to allow the device to operate.
In some embodiments the memory is of a type which is incapable of supporting eXecute-In-Place (XIP) operations, which is why the software components need to be loaded into RAM for execution. Preferably the memory is NAND Flash memory, which is used in many modern devices because it provides large memory capacity at relatively low cost compared with other types of memory.
Whilst some of the above described embodiments are discussed in the context of the example of a smartphone, it should be understood that in other embodiments different types of device may be provided, for various different functions. For example, the techniques of the present invention may be used to provide embodiments with different applications, such as for example, as a general purpose computer, or as a portable media player, or other audio visual device, such as a camera. Any device or machine which incorporates a computing device provided with RAM into which data and programs need to be loaded for execution may benefit from the invention and constitute an embodiment thereof. The invention may therefore be applied in many fields, to provide improved devices or machines that require less RAM to operate than had heretofore been the case.
In addition, whilst embodiments have been described in respect of a smartphone running Symbian OS, which makes use of a combined file system, it should be further understood that this is presented for illustration only, and in other embodiments the concepts of the demand paging algorithms described herein may be used in other devices, and in particular devices which do not require a split file system such as the composite file system described. Instead, the demand paging algorithm herein described may be used in any device in which virtual memory techniques involving paging programs and data into memory for use by a processor may be used.
Various modifications, including additions and deletions, will be apparent to the skilled person to provide further embodiments, any and all of which are intended to fall within the appended claims. It will be understood that any combinations of the features and examples of the described embodiments of the invention may be made within the scope of the invention.

Claims

1. A method comprising :- determining if a software component is capable of being divided into memory pages for loading into and out of random access memory (RAM); storing the software component in a first part of a memory or a second part of the memory in dependence on the determination as to whether the component is capable of being divided into memory pages for loading into and out of RAM; wherein the first part of the memory is a part from which software components can be paged in pages from the memory into RAM for execution, and the second part of the memory is a part from which whole components are read into RAM for execution, without being paged.
2. A method according to claim 1, wherein the software component is stored in the first part of the memory if the component is capable of being divided into memory pages for loading into and out of RAM.
3. A method according to claim 2, wherein the software component is stored in the first part of the memory if the component is a dependency of another component which is capable of being divided into memory pages for loading into and out of RAM.
4. A method according to claim 3, wherein otherwise the software component is stored in the second part of the memory.
5. A method according to claim 1, wherein the software component is stored in the first part of the memory or the second part of the memory in further dependence on the determination as to whether other software components which are dependencies of the component are capable of being divided into memory pages for loading into and out of RAM.
6. A method according to claim 5, wherein the software component is stored in the first part of the memory if it is capable itself of being divided into memory pages for loading into and out of RAM and the other software components which are dependencies of the component are also all capable of being divided into memory pages for loading into and out of RAM.
7. A method according to claim 1, 5, or 6, wherein the software component is stored in the first part of the memory if it is capable itself of being divided into memory pages for loading into and out of RAM and the other software components which are dependencies of the component are members of a predetermined privileged set of components.
8. A method according to any of the preceding claims, wherein the software component is stored in the first part of the memory if it is a member of a predetermined privileged set of components.
9. A method according to claims 7 or 8, wherein the predetermined privileged set comprises those software components which during use of a computing device comprising the set of components are in any event loaded into RAM.
10. A method according to claim 9, wherein the components in the set are those components which are loaded into RAM during one or more test use cases of the device.
11. A method according to claim 9, wherein the components in the set are those components which are loaded into RAM because they are kernel components of an operating system of the computing device.
12. A method according to any of the preceding claims, wherein the memory is of a type which is incapable of supporting eXecute-In-Place (XIP) operations.
13. A method according to claim 12, wherein the memory is NAND Flash memory.
14. Apparatus comprising:
a processor;
a memory having a first part and a second part; and
random access memory (RAM);
wherein the processor is arranged to cause the apparatus to:
i) determine if a software component is capable of being divided into memory pages for loading into and out of random access memory (RAM); and
ii) store the software component in the first part of the memory or the second part of the memory in dependence on the determination as to whether the component is capable of being divided into memory pages for loading into and out of RAM;
wherein the first part of the memory is a part from which software components can be paged in pages from the memory into RAM for execution, and the second part of the memory is a part from which whole components are read into RAM for execution, without being paged.
15. A memory having a first part from which software components can be paged in pages from the memory into a RAM of a computing device for execution, and a second part from which whole components are read into RAM for execution, without being paged, wherein the memory has stored in the first part and the second part software components which have been stored in the first part or the second part using the method and apparatus of any of the preceding claims.
16. A memory according to claim 15, wherein the memory is of a type which is incapable of supporting eXecute-In-Place (XIP) operations.
17. A memory according to claim 16, wherein the memory is NAND Flash memory.
18. A method substantially as hereinbefore described with reference to any of Figures 10 to 14.
19. A computer program configured to perform the method of any of claims 1 to 13.
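The placement rule recited in claims 1 to 11 can be summarised as a simple decision procedure. The following Python sketch is illustrative only — all names, data structures, and the component examples are assumptions, not part of the application — and follows the claims literally: a component is placed in the first (paged) part if it is a member of the privileged set (claim 8); otherwise it is placed there only if it is itself pageable (claims 3 and 4) and every one of its dependencies is either pageable or a member of the privileged set (claims 5 to 7); in all other cases it goes to the second (unpaged) part.

```python
# Illustrative sketch of the component-placement decision of claims 1-11.
# Component names, dictionary shapes, and the privileged set below are
# hypothetical; the claims do not prescribe any particular representation.

PAGED, UNPAGED = "paged", "unpaged"


def place_component(name, pageable, deps, privileged):
    """Decide which part of the memory a software component is stored in.

    pageable:   maps component name -> True if the component can be divided
                into memory pages for loading into and out of RAM (claim 1)
    deps:       maps component name -> names of its dependencies (claim 5)
    privileged: set of components loaded into RAM in any event (claims 8-11)
    """
    # Claim 8: members of the predetermined privileged set go to the
    # first (paged) part regardless of the other tests.
    if name in privileged:
        return PAGED
    # Claims 3-4: a component that cannot itself be paged is stored
    # whole, in the second (unpaged) part.
    if not pageable.get(name, False):
        return UNPAGED
    # Claims 5-7: every dependency must itself be pageable, or else be a
    # member of the privileged set (and so resident in RAM in any event).
    for dep in deps.get(name, ()):
        if not (pageable.get(dep, False) or dep in privileged):
            return UNPAGED
    return PAGED
```

For example, a pageable component whose only dependency is a non-pageable kernel component would still be placed in the paged part under claim 7, provided that kernel component belongs to the privileged set.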
PCT/FI2009/050464 2008-05-30 2009-06-01 Method and apparatus for storing software components in memory WO2009144386A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB0809954A GB2460636A (en) 2008-05-30 2008-05-30 Storing operating-system components in paged or unpaged parts of memory
GB0809954.1 2008-05-30

Publications (1)

Publication Number Publication Date
WO2009144386A1 true WO2009144386A1 (en) 2009-12-03

Family

ID=39637948

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/FI2009/050464 WO2009144386A1 (en) 2008-05-30 2009-06-01 Method and apparatus for storing software components in memory

Country Status (2)

Country Link
GB (1) GB2460636A (en)
WO (1) WO2009144386A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11640308B2 (en) * 2021-02-19 2023-05-02 Macronix International Co., Ltd. Serial NAND flash with XiP capability

Citations (5)

Publication number Priority date Publication date Assignee Title
US5754817A (en) * 1994-09-29 1998-05-19 Intel Corporation Execution in place of a file stored non-contiguously in a non-volatile memory
US6349355B1 (en) * 1997-02-06 2002-02-19 Microsoft Corporation Sharing executable modules between user and kernel threads
US20070043938A1 (en) * 2003-08-01 2007-02-22 Symbian Software Limited Method of accessing data in a computing device
US20070157001A1 (en) * 2006-01-04 2007-07-05 Tobias Ritzau Data compression method for supporting virtual memory management in a demand paging system
EP1811384A2 (en) * 2005-12-27 2007-07-25 Samsung Electronics Co., Ltd. Demand paging in an embedded system

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
US5339406A (en) * 1992-04-03 1994-08-16 Sun Microsystems, Inc. Reconstructing symbol definitions of a dynamically configurable operating system defined at the time of a system crash
US6032240A (en) * 1997-11-12 2000-02-29 Intergraph Corporation Bypassing a nonpaged pool controller when accessing a remainder portion of a random access memory
US6804766B1 (en) * 1997-11-12 2004-10-12 Hewlett-Packard Development Company, L.P. Method for managing pages of a designated memory object according to selected memory management policies
US6332172B1 (en) * 1998-05-29 2001-12-18 Cisco Technology, Inc. Method and system for virtual memory compression in an embedded system
GB0504326D0 (en) * 2005-03-02 2005-04-06 Symbian Software Ltd Dual mode operating system for a computing device
US7496708B2 (en) * 2006-07-19 2009-02-24 International Business Machines Corporation Boot read-only memory (ROM) configuration optimization

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
US5754817A (en) * 1994-09-29 1998-05-19 Intel Corporation Execution in place of a file stored non-contiguously in a non-volatile memory
US6349355B1 (en) * 1997-02-06 2002-02-19 Microsoft Corporation Sharing executable modules between user and kernel threads
US20070043938A1 (en) * 2003-08-01 2007-02-22 Symbian Software Limited Method of accessing data in a computing device
EP1811384A2 (en) * 2005-12-27 2007-07-25 Samsung Electronics Co., Ltd. Demand paging in an embedded system
US20070157001A1 (en) * 2006-01-04 2007-07-05 Tobias Ritzau Data compression method for supporting virtual memory management in a demand paging system

Non-Patent Citations (3)

Title
HANDLEY D.: "Demand Paging on Symbian OS", I.Q. MAGAZINE ONLINE, vol. 7, no. 2, April 2008 (2008-04-01), pages 71 - 76, Retrieved from the Internet <URL:http://www.iqmagazineonline.com/IQ/IQ23/pdfs/IQ23_pgs71-76.pdf> [retrieved on 20090921] *
HANDLEY, D.: "Demand Paging on Symbian OS", TECHONLINE, TECHNICAL PAPERS [ONLINE], Retrieved from the Internet <URL:http://www.techonline.com/learning/techpaper/208403594> [retrieved on 20090921] *
SALES, J.: "Demand Paging on Symbian", 25 June 2009 (2009-06-25), Retrieved from the Internet <URL:http://www.scribd.com/doc/16775509/Demand-Paging-on-Symbian-Online-Book> [retrieved on 20090921] *

Also Published As

Publication number Publication date
GB0809954D0 (en) 2008-07-09
GB2460636A (en) 2009-12-09

Similar Documents

Publication Publication Date Title
US9021243B2 (en) Method for increasing free memory amount of main memory and computer therefore
KR100900439B1 (en) Method and Apparatus for managing out-of-memory in embedded system
JP5422652B2 (en) Avoiding self-eviction due to dynamic memory allocation in flash memory storage
KR20140118093A (en) Apparatus and Method for fast booting based on virtualization and snapshot image
US10789184B2 (en) Vehicle control device
CN114546634B (en) Management of synchronous restart of system
JP2014178913A (en) Electronic apparatus, method of creating snapshot image, and program
US9063868B2 (en) Virtual computer system, area management method, and program
CN111427804B (en) Method for reducing missing page interruption times, storage medium and intelligent terminal
US9037773B2 (en) Methods for processing and addressing data between volatile memory and non-volatile memory in an electronic apparatus
US10346234B2 (en) Information processing system including physical memory, flag storage unit, recording device and saving device, information processing apparatus, information processing method, and computer-readable non-transitory storage medium
JP2008532163A5 (en)
WO2009144383A1 (en) Memory management method and apparatus
KR100994723B1 (en) selective suspend resume method of reducing initial driving time in system, and computer readable medium thereof
WO2009144386A1 (en) Method and apparatus for storing software components in memory
JP2015035007A (en) Computer, control program, and dump control method
US7577814B1 (en) Firmware memory management
CN112654965A (en) External paging and swapping of dynamic modules
JP6217008B2 (en) Electronic device, control method, and program
US20090031100A1 (en) Memory reallocation in a computing environment
WO2009144384A1 (en) Memory paging control method and apparatus
US20050027954A1 (en) Method and apparatus to support the maintenance and reduction of FLASH utilization as it pertains to unused or infrequently referenced FLASH data
WO2009144385A1 (en) Memory management method and apparatus
US20080072009A1 (en) Apparatus and method for handling interrupt disabled section and page pinning apparatus and method
JP2017004522A (en) Memory protection unit, memory management unit, and microcontroller

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09754039

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09754039

Country of ref document: EP

Kind code of ref document: A1