GB2460636A - Storing operating-system components in paged or unpaged parts of memory - Google Patents
Info
- Publication number
- GB2460636A (Application GB0809954A)
- Authority
- GB
- United Kingdom
- Prior art keywords
- memory
- ram
- component
- components
- paged
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/10—Address translation
- G06F12/1027—Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB]
- G06F12/1036—Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB] for multiple virtual address spaces, e.g. segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/445—Program loading or initiating
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/0223—User address space allocation, e.g. contiguous or non contiguous base addressing
- G06F12/023—Free address space management
- G06F12/0238—Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
- G06F12/0246—Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory in block erasable memory, e.g. flash memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/06—Addressing a physical block of locations, e.g. base addressing, module addressing, memory dedication
- G06F12/0638—Combination of memories, e.g. ROM and RAM such as to permit replacement or supplementing of words in one module by words in another module
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/445—Program loading or initiating
- G06F9/44557—Code layout in executable memory
Abstract
The invention concerns a method for determining which components of an operating system (or other software programs) should be included in an area of a memory which is capable of being paged into RAM, and which components should be included in an area of memory from which only whole components are read into RAM. Whether a software component should be placed in the pageable area of the memory depends on whether or not the software component is capable of being divided into memory pages (i.e. "paged" or "unpaged"), for example by checking a flag in a header of the component or a keyword in an instruction file. Dependencies of a component (i.e. the other software components on which a component relies for its operation) may also be examined to determine if they are capable of being divided into memory pages. The component and the dependencies are included or not included in the pageable area of the memory accordingly.
Description
Method and System for Determining Memory Contents in an Operating System
Technical Field
The present invention relates to a method and system for determining which components of a device operating system are required to reside in particular areas of memory, and in particular to such a method which allows memory paging techniques to be used to reduce the amount of physical memory required in a device. The invention also relates to a memory having contents determined by such a method.
Background to the Invention
Many modern electronic devices make use of operating systems. Modern operating systems can be found on anything composed of integrated circuits, like personal computers, Internet servers, cell phones, music players, routers, switches, wireless access points, network storage, game consoles, digital cameras, DVD players, sewing machines, and telescopes. An operating system is the software that manages the sharing of the resources of the device, and provides programmers with an interface to access those resources. An operating system processes system data and user input, and responds by allocating and managing tasks and internal system resources as a service to users and programs on the system. At its most basic, the operating system performs tasks such as controlling and allocating memory, prioritising system requests, controlling input and output devices, facilitating networking, and managing files. An operating system is in essence an interface by which higher level applications can access the hardware of the device.
Many modern electronic devices which make use of operating systems have as their basis a similar physical hardware architecture, making use of an application processor provided with suitable memory which stores the device operating system, as well as the higher level application programs which determine the functionality of the device. The operating system and other programs are typically stored in non-volatile Read-Only Memory, and the operating system is loaded first, to allow the application processor to then run the higher level application programs. One very common modern electronic device which makes use of an operating system is a smartphone, the generic hardware architecture for which is shown in Figure 1.
With reference to Figure 1, a typical smartphone 10 comprises hardware to perform the telephony functions, together with an application processor and corresponding support hardware to enable the phone to have other functions which are desired of a smartphone, such as messaging, calendar, word processing functions and the like. In Figure 1 the telephony hardware is represented by the RF processor 102 which provides an RF signal to antenna 126 for the transmission of telephony signals, and for the receipt of signals therefrom. Additionally provided is baseband processor 104, which provides signals to and receives signals from the RF Processor 102. The baseband processor 104 also interacts with a subscriber identity module 106, as is well known in the art. The telephony subsystem of the smartphone 10 is beyond the scope of the present invention.
Also typically provided is a display 116, and a keypad 118. These are controlled by an application processor 108, which is often a separate integrated circuit from the baseband processor 104 and RF processor 102, although in the future it is anticipated that single chip solutions will become available. A power and audio controller 120 is provided to supply power from a battery (not shown) to the telephony subsystem, the application processor, and the other hardware. Additionally, the power and audio controller 120 also controls input from a microphone 122, and audio output via a speaker 124.
In order for the application processor 108 to operate, various different types of memory are often provided. Firstly, the application processor 108 may be provided with some Random Access Memory (RAM) 112, into which data and program code can be written, and from which they can be read, at will. Code placed anywhere in RAM can be executed by the application processor 108 from the RAM.
Separate user memory 110 is also often provided, which is used to store user data, such as user application programs (typically higher layer application programs which determine the functionality of the device), as well as user data files, and the like.
As mentioned previously, in order for the application processor 108 to operate, an operating system is necessary, which must be started as soon as the smartphone system is first switched on. The operating system code is commonly stored in a Read-Only Memory, and in modern devices, the Read-Only Memory is often NAND Flash ROM 114. The ROM will store the necessary operating system components in order for the device 10 to operate, but other software programs may also be stored, such as application programs, and the like, and in particular those application programs which are mandatory to the device, such as, in the case of a smartphone, communications applications and the like. These would typically be the applications which are bundled with the smartphone by the device manufacturer when the phone is first sold. Further applications which are added to the smartphone by the user would usually be stored in the user memory 110.
ROM (Read-Only Memory) traditionally refers to memory devices that physically store data in a way which cannot be modified. These devices also allow direct random access to their contents and so code can be executed from them directly; code is eXecute-In-Place (XIP). This has the advantage that programs and data in ROM are always available and don't require any action to load them into memory.
In the case of smartphones, a well known operating system is that produced by the present applicant, known as Symbian OS. In Symbian OS, the term ROM has developed the looser meaning of 'data stored in such a way that it behaves like it is stored in read-only memory'. The underlying media may actually be physically writeable, like RAM or flash memory, but the file system presents a ROM-like interface to the rest of the OS, usually as drive Z:.
The ROM situation is further complicated when the underlying media is not XIP. This is the case for NAND flash, used in many modern devices. Here it is necessary to copy (or shadow) any code in NAND to RAM, where it can be executed in place. The simplest way of achieving this is to copy the entire ROM contents into RAM during system boot and use the Memory Management Unit (MMU) to mark this area of RAM with read-only permissions. The data stored by this method is called the Core ROM image (or just Core image) to distinguish it from other data stored in NAND. The Core image is an XIP ROM and is usually the only one; it is permanently resident in RAM.
Figure 2, layout A shows how the NAND flash 20 is structured in this simple case. All the ROM contents 22 are permanently resident in RAM and any executables in the user data area 24 (usually the C: or D: drive) are copied into RAM as they are needed.
The above method is costly in terms of RAM usage so a more efficient scheme was developed that (broadly speaking) splits the ROM contents into those parts required to boot the OS, and everything else. The former is placed in the Core image as before and the latter is placed into another area called the Read-Only File System (ROFS). Code in ROFS is copied into RAM as it is needed at runtime, at the granularity of an executable (or other whole file), in the same way as executables in the user data area. In Symbian OS the component responsible for doing this is the 'Loader', which is part of the File Server process. Herein, 'executables' means any executable code, including DLL (dynamic link library) functions.
Potentially, there are several ROFS images, for example localisation and/or operator-specific images. Usually, the first one (called the primary ROFS) is combined with the Core image into a single ROM-like interface by what is known as the Composite File System.
Layout B in Figure 2 shows an ordinary Composite File System structure. Here, ROM is divided into the Core Image 32 comprising those components of the OS which will always be loaded into RAM, and the ROFS 34 containing those components which do not need to be continuously present in RAM, but which can be loaded in and out of RAM as required. As mentioned, components in the ROFS 34 are loaded into RAM as whole components when they are required, and unloaded when they are no longer required. Comparing this to layout A, it can be seen that layout B is more RAM-efficient because some of the contents of the ROFS 34 are not copied into RAM at any given time. The more unused files there are in the ROFS 34, the greater the RAM saving.
Since an XIP ROM image on NAND is actually stored in RAM, an opportunity arises to demand page the contents of the XIP ROM. That is, read its data contents from NAND flash into RAM (where it can be executed), on demand. This is called XIP ROM Paging (or demand paging). Here, "paging" refers to reading in required segments ("pages") of executable code into RAM as they are required, at a finer granularity than that of the entire executable. Typically, page size may be around 4kB; that is, code can be read in and out of RAM as required in 4kB chunks. A single executable may comprise a large number of pages. Paging is therefore very different from the operation of the ROFS, for example, wherein whole executables are read in and out of RAM as they are required to be run.
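As a rough illustration of this granularity, the number of pages an executable occupies is simply a ceiling division of its size by the page size. The following is a trivial sketch (in Python, purely for illustration); the 4kB figure is the typical page size quoted above:

```python
PAGE_SIZE = 4 * 1024   # typical page size quoted above (4 kB)

def pages_needed(code_size_bytes):
    """Number of pages an executable of the given size would span."""
    return -(-code_size_bytes // PAGE_SIZE)   # ceiling division

# A 120 kB executable spans 30 independently pageable pages.
print(pages_needed(120 * 1024))   # -> 30
```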
An XIP ROM image is split into two parts: one containing unpaged data and one containing data paged on demand. Unpaged data is those executables and other data which cannot be split up into pages; it consists of kernel-side code plus those parts that should not be paged for other reasons (e.g. performance, robustness, power management, etc.). The terms 'locked down' or 'wired' are also used to mean unpaged. Paged data is those executables and other data which can be split up into pages.
At boot time, the unpaged area at the start of the XIP ROM image is loaded into RAM as normal but the linear address region normally occupied by the paged area is left unmapped - i.e. no RAM is allocated for it.
When a thread accesses memory in the paged area, it takes a page fault. The page fault handler code in the kernel then allocates a page of RAM and reads the contents for this from the XIP ROM image contained on storage media (e.g. NAND flash). As mentioned, a page is a convenient unit of memory allocation, usually 4kB. The thread then continues execution from the point where it took the page fault. This process is called 'paging in' and is described in more detail later.
When the free RAM on the system reaches zero, memory allocation requests can be satisfied by taking RAM from the paged-in XIP ROM region. As RAM pages in the XIP ROM region are unloaded, they are said to be 'paged out'. Figure 3 shows the operations just described.
Note that all content in the paged data area of an XIP ROM is subject to paging, not just executable code; accessing any file in this area may induce a page fault. A page may contain data from one or more files and page boundaries do not necessarily coincide with file boundaries.
Figure 2, layout C shows a typical XIP ROM paging structure. Here, ROM 40 comprises an unpaged core area 42 containing those components which should not be paged, and a paged core area 44 containing those components which should reside in the core image rather than the ROFS, but which can be paged. ROFS 46 then contains those components which do not need to be in the Core image. Although the unpaged area of the Core image may be larger than the total Core image in layout B, only a fraction of the contents of the paged area needs to be copied into RAM compared to the amount of loaded ROFS code in layout B. Further details of the algorithm which controls demand paging will now be described.
All memory content that can be demand paged is said to be 'paged memory' and the process is controlled by the 'paging subsystem'. A page is typically a 4kB block of RAM, as mentioned, although in different systems other size pages can be used. Here are some other terms that are used: 1. Live Page - A page of paged memory whose contents are currently available.
2. Dead Page - A page of paged memory whose contents are not currently available.
3. Page In - The act of making a dead page into a live page.
4. Page Out - The act of making a live page into a dead page. The RAM used to store its contents may then be reused for other purposes.
Efficient performance of the paging subsystem is dependent on the algorithm that selects which pages are live at any given time, or conversely, which live pages should be made dead. The paging subsystem approximates a Least Recently Used (LRU) algorithm for determining which pages to page out.
All live pages are stored on the 'live page list', which is an integral part of the paging cache. Figure 4 shows the live page list. The live page list is split into two sub-lists, one containing young pages and the other, old pages. The MMU is used to make all young pages accessible to programs but the old pages inaccessible. However, the contents of old pages are preserved and they still count as being live.
Figure 5 shows what happens when a page is "paged in". When a page is paged in, it is added to the start of the young list in the live page list, making it the youngest.
The paging subsystem attempts to keep the relative sizes of the two lists equal to a value called the young/old ratio. If this ratio is R, the number of young pages is Ny and the number of old pages is No, then if (Ny > R × No), a page is taken from the end of the young list and placed at the start of the old list. This process is called ageing, and is shown in Figure 6.
If an old page is accessed by a program, this causes a page fault because the MMU has marked old pages as inaccessible. The paging subsystem then turns that page into a young page (i.e. rejuvenates it), and at the same time turns the last young page into an old page. This is shown in Figure 7.
When the operating system requires more RAM for another purpose then it may need to obtain the memory used by a live page. In this case the oldest' live page is selected for paging out, turning it into a dead page, as shown in Figure 8. If paging out leaves too many young pages, according to the young/old ratio, then the last young page (e.g. Page D in Figure 8) would be aged.
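The young/old bookkeeping described above can be modelled with a short sketch. This is an illustration only, not Symbian OS kernel code: the class and method names are invented, the young/old ratio of 3 is merely a plausible default, and real pages would of course be RAM pages tracked by the MMU rather than Python objects.

```python
from collections import deque

class LivePageList:
    """Illustrative model of the live page list (young and old sub-lists)."""

    def __init__(self, young_old_ratio=3):
        self.ratio = young_old_ratio   # R in the description above (assumed value)
        self.young = deque()           # accessible pages; youngest at the left
        self.old = deque()             # preserved but marked inaccessible by the MMU

    def _age_if_needed(self):
        # If Ny > R x No, the last young page is moved to the start of the old
        # list (Figure 6).  The extra length check just avoids emptying the young list.
        while len(self.young) > self.ratio * len(self.old) and len(self.young) > 1:
            self.old.appendleft(self.young.pop())

    def page_in(self, page):
        # A newly paged-in page becomes the youngest young page (Figure 5).
        self.young.appendleft(page)
        self._age_if_needed()

    def rejuvenate(self, page):
        # Accessing an old page faults; it is made young again and the last
        # young page is aged in its place (Figure 7).
        self.old.remove(page)
        self.young.appendleft(page)
        if len(self.young) > 1:
            self.old.appendleft(self.young.pop())

    def page_out_oldest(self):
        # The oldest live page becomes a dead page and its RAM is reused
        # (Figure 8); the lists are then re-balanced against the ratio.
        victim = self.old.pop() if self.old else self.young.pop()
        self._age_if_needed()
        return victim
```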
When a program attempts to access paged memory that is 'dead', a page fault is generated by the MMU and the executing thread is diverted to the Symbian OS exception handler. This performs the following tasks: 1. Obtain a page of RAM from the system's pool of unused RAM (i.e. the 'free pool'), or if this is empty, page out the oldest live page and use that instead.
2. Read the contents for this page from some media (e.g. NAND flash).
3. Update the paging cache's live list as described in the previous section.
4. Use the MMU to make this RAM page accessible at the correct linear address.
5. Resume execution of the program's instructions, starting with the one that caused the initial page fault.
Note the above actions are executed in the context of the thread that tries to access the paged memory.
When the system requires more RAM and the free pool is empty then RAM that is being used to store paged memory is freed up for use. This is called 'paging out' and happens by the following steps: 1. Remove the 'oldest' RAM page from the paging cache.
2. Use the MMU to mark the page as inaccessible.
3. Return the RAM page to the free pool.
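The two sequences just listed can be set side by side in a simplified sketch. Everything here is illustrative rather than real kernel code: rom_image.read, map_at and unmap are placeholders for the NAND flash read and MMU operations, and the live list is shown as a plain list rather than the young/old structure modelled earlier.

```python
PAGE_SIZE = 4 * 1024   # typical page size

free_pool = []         # unused RAM pages
live_pages = []        # paged-in pages, oldest first


def handle_page_fault(linear_address, rom_image):
    """Steps 1-5 of 'paging in', run in the context of the faulting thread."""
    # 1. Obtain a RAM page from the free pool, or page out the oldest live page.
    ram_page = free_pool.pop() if free_pool else page_out_oldest()
    # 2. Read the page contents from the media (rom_image.read stands in for
    #    the NAND flash read).
    offset = (linear_address // PAGE_SIZE) * PAGE_SIZE
    ram_page.contents = rom_image.read(offset, PAGE_SIZE)
    # 3. Update the paging cache's live list; the new page is the youngest.
    live_pages.append(ram_page)
    # 4. Make the page accessible at the correct linear address (MMU placeholder).
    ram_page.map_at(linear_address)
    # 5. The faulting instruction is then re-executed by the kernel.
    return ram_page


def page_out_oldest():
    """Steps 1-3 of 'paging out', used when more free RAM is required."""
    oldest = live_pages.pop(0)   # 1. remove the oldest page from the paging cache
    oldest.unmap()               # 2. mark it inaccessible via the MMU (placeholder)
    return oldest                # 3. hand the RAM page back for reuse
```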
Although the primary purpose of demand paging is to save RAM, there are at least 2 other potential benefits that may be observed. These benefits are highly dependent on the paging configuration, discussed later.
A first performance benefit is due to so-called "lazy loading". In general, the cost of servicing a page fault means that paging has a negative impact on performance.
However, in some cases demand paging (DP) actually improves performance compared with the non-DP composite file system case (Figure 2, layout B), especially when the use-case normally involves loading a large amount of code into RAM (e.g. when booting or starting up large applications). In these cases, the performance overhead of paging can be outweighed by the performance gain of loading less code into RAM. This is sometimes known as 'lazy loading' of code.
Note that when the non-DP case consists of a large core image (i.e. something closer to Figure 2, layout A), most or all of the code involved in a use-case will already be permanently loaded, and so the performance improvement of lazy loading will be reduced. The exception to this is during boot, where the cost of loading the whole core image into RAM contributes to the overall boot time.
A second performance improvement lies in improved stability of the device. The stability of a device is often at its weakest in Out Of Memory (OOM) situations. Poorly written code may not cope well with exceptions caused by failed memory allocations.
As a minimum, an OOM situation will degrade the user experience.
If DP is enabled on a device and the same physical RAM is available compared with the non-DP case, the increased RAM saving makes it more difficult for the device to go OOM, avoiding many potential stability issues. Furthermore, the RAM saving achieved by DP is proportional to the amount of code loaded in the non-DP case at a particular time. For instance, the RAM saving when 5 applications are running is greater than the saving immediately after boot. This makes it even harder to induce an OOM situation.
Note this increased stability only applies when the entire device is OOM. Individual threads may have OOM problems due to reaching their own heap limits. DP will not help in these cases.
As mentioned, whether the above performance improvements are obtained will depend very much on the demand paging configuration. Demand paging introduces three new configurable parameters to the system. These are: 1. The amount of code and data that is marked as unpaged.
2. The minimum size of the paging cache.
3. The ratio of young pages to old pages in the paging cache.
The first two are the most important and they are discussed below. The third has a less dramatic effect on the system and should be determined empirically.
With respect to the amount of unpaged files, it is important that all areas of the OS involved in servicing a paging fault are protected from blocking on the thread that took the paging fault (directly or indirectly). Otherwise, a deadlock situation may occur. This is partly achieved in Symbian OS by ensuring that all kernel-side components are always unpaged.
In addition to kernel-side components, there are likely to be a number of components that are explicitly made unpaged to meet the functional and performance requirements of the device. The performance overhead of servicing a page fault is unbounded and variable so some critical code paths may need to be protected by making files unpaged.
It may be necessary to make chains of files and their dependencies unpaged to achieve this. It may be possible to reduce the set of unpaged components by breaking unnecessary dependencies and separating critical code paths from non-critical ones.
When making a component unpaged is a straightforward performance/RAM trade-off, this can be made configurable, allowing the device manufacturer to make the decision based on their system requirements.
With respect to the paging cache size, as described previously, if the system requires more free RAM and the free RAM pool is empty, then pages are removed from the paging cache in order to service the memory allocation. This cannot continue indefinitely or a situation will arise where the same pages are continually paged in and out of the paging cache; this is known as page thrashing. Performance is dramatically reduced in this situation.
To avoid catastrophic performance loss, a minimum paging cache size can be defined. If a system memory allocation would cause the paging cache to drop below the minimum size, then the allocation fails.
As paged data is paged in, the paging cache grows but any RAM used by the cache above the minimum size does not contribute to the amount of used RAM reported by the system. Although this RAM is really being used, it will be recycled whenever anything else in the system requires the RAM. So the effective RAM usage of the paging cache is determined by its minimum size.
In theory, it is also possible to limit the maximum paging cache size. However, this is not useful in production devices because it prevents the paging cache from using all the otherwise unused RAM in the system. This may negatively impact performance for no effective RAM saving.
The main advantage of using DP is therefore the RAM saving which is obtained. The easiest way to visualise the RAM saving achieved by DP is to compare the most simplistic configurations. Consider a non-DP ROM consisting of a Core with no ROFS (as in Figure 2, layout A). Compare that with a DP ROM consisting of an XIP ROM paged Core image, again with no ROFS (similar to Figure 2, layout C but without the ROFS). The total ROM contents are the same in both cases. Here the effective RAM saving is depicted in Figure 9. The effective RAM saving is the size of all paged components minus the minimum size of the paging cache. Note that when a ROFS section is introduced, this calculation is much more complicated because the contents of the ROFS are likely to be different between the non-DP and DP cases.
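For this simple no-ROFS comparison the saving reduces to a single subtraction; the figures below are invented purely to illustrate the arithmetic.

```python
# Invented example figures, in kilobytes.
paged_component_sizes_kb = [512, 256, 1024, 2048]   # sizes of all paged components
min_paging_cache_kb = 1024                          # minimum paging cache size

effective_ram_saving_kb = sum(paged_component_sizes_kb) - min_paging_cache_kb
print(effective_ram_saving_kb)   # -> 2816 kB saved compared with the non-DP Core image
```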
The RAM saving can be increased by reducing the set of unpaged components and/or reducing the minimum paging cache size (i.e. making the configuration more 'stressed'). Performance can be improved (up to a point) by increasing the set of unpaged components and/or increasing the minimum paging cache size (i.e. making the configuration more 'relaxed'). However, if the configuration is made too relaxed then it is possible to end up with a net RAM increase compared with a non-DP ROM.
Demand paging is therefore able to present significant advantages in terms of RAM savings, and hence provides an attendant reduction in the manufacturing cost of a device. Additionally, as mentioned above, depending on configuration, performance improvements can also be obtained. However, when actually implementing demand paging on a device, a problem can arise in terms of selecting which OS components should actually be included in that part of the ROM (the Core image) which is subject to demand paging, rather than being included in the ROFS. If this selection is not performed correctly, then no RAM savings will in fact be achieved. In such a case, dependent on the DP configuration, it may be that in fact performance overheads are being incurred in the form of page faults with no concurrent benefit in the form of a reduced RAM requirement.
More particularly, one might have thought that given that demand paging can only operate on paged components, then all paged components should be placed in the Core image (where they can then be demand paged), and all unpaged components placed in the ROFS (or other file system). However, it is not possible to simply place all paged components in the core ROM image and all unpaged components in the primary ROFS because there is a restriction that all static dependencies (such as, for example, DLL functions, other executables, etc.) of executable components in a core ROM image must also be present in that image, whether they are paged or unpaged. If a paged executable has a number of unpaged dependencies, then the RAM savings made by placing the paged executable in the core image may be offset by the RAM loss of having its unpaged dependencies in the core ROM image as well. This problem is referred to as the "core/ROFS split", and previously has been solved manually on a device by device basis. However, such an approach is time consuming, and does not in fact guarantee that an appropriate split is obtained that results in a RAM saving. A different approach to determining the "core/ROFS split", i.e. which components should be included in the Core image and which in the ROFS, is required; one which can ensure that the RAM saving benefits of demand paging are obtained.
Summary of the Invention
Embodiments of the present invention provide an improved methodology for determining which components of an operating system (or other software programs) need to be included in an area of a memory which is capable of being paged into RAM, and which components should be included in an area of memory from which only whole components at a time are read into RAM. More particularly, embodiments of the invention provide a methodology which makes a decision as to whether a software component should be placed in the pageable area of the memory in dependence on whether the software component itself is capable of being divided into memory pages (i.e. whether the component is "paged"). In some embodiments, as well as looking at the software component itself, the dependencies of the component (i.e. the other software components on which the first component relies for its operation) are also examined to determine if they are capable of being divided into memory pages, and if they are so capable then the component and the dependencies are included in the pageable area of the memory. If the dependencies are not capable of being paged (i.e. are "unpaged"), then the component and the dependencies may not be included in the pageable area of the memory.
In further embodiments, a "privileged set" of components is compiled of components which should always be included in the pageable area in any event, even if the components themselves are not paged. The decision as to whether a particular component should be placed in the pageable area of the memory is then made in dependence on whether the component and its dependencies are paged, and also in dependence on whether the dependencies are in the privileged set.
Using the embodiments, the contents of a memory, in terms of which software components should be stored in which part of the memory, can be determined so as to ensure that the primary benefit of demand paging, namely a RAM saving, is obtained. Saving RAM in the device leads to a reduction in the component cost of the device.
In view of the above from a first aspect there is provided a method of allocating software components to a first part of a memory in a computing device and a second part of the memory in the device, comprising the steps, for a particular component: determining if the software component is capable of being divided into memory pages for loading into and out of random access memory (RAM) of the computing device; storing the software component in the first part of the memory or the second part of the memory in dependence on the determination as to whether the component is capable of being divided into memory pages for loading into and out of RAM; wherein the first part of the memory is a part from which software components can be paged in pages from the memory into RAM for execution, and the second part of the memory is a part from which whole components are read into RAM for execution, without being paged.
Preferably, the software component is stored in the first part of the memory if the component is capable of being divided into memory pages for loading into and out of RAM. This ensures that paged components, which are capable of being subjected to demand paging, are stored in the part of the memory in which demand paging is performed, and hence the benefits of demand paging can be obtained.
Moreover, in embodiments the software component is stored in the first part of the memory if the component is a dependency of another component which is capable of being divided into memory pages for loading into and out of RAM. This ensures that the condition that unpaged dependencies of a paged component are also included in the part of the memory which is paged is met.
If the above conditions are not met, then preferably the software component is stored in the second part of the memory. This prevents the first part of the memory from becoming too large, hence allowing RAM savings to be made.
In another, preferred, embodiment the software component is stored in the first part of the memory or the second part of the memory in further dependence on the determination as to whether other software components which are dependencies of the component are capable of being divided into memory pages for loading into and out of RAM. More preferably, the software component is stored in the first part of the memory if it is capable itself of being divided into memory pages for loading into and out of RAM and the other software components which are dependencies of the component are also all capable of being divided into memory pages for loading into and out of RAM. These conditions ensure that only paged components which can be subject to demand paging are placed in the first part of the memory, and hence RAM is not wasted in storing unpaged components which are there simply because they are a dependency of a paged component.
Additionally, in the preferred embodiment the software component is stored in the first part of the memory if it is capable itself of being divided into memory pages for loading into and out of RAM and the other software components which are dependencies of the component are members of a predetermined privileged set of components. This recognises the fact that there are some unpaged components which are in any event stored in RAM almost all the time. If these components are the dependencies of a paged component, then that paged component should be included in the part of the memory which can be paged.
Moreover, in the preferred embodiment the software component is stored in the first part of the memory if it is a member of a predetermined privileged set of components.
Preferably the predetermined privileged set comprises those software components which during use of the computing device are in any event loaded into RAM. This recognises that if the component is in any event loaded into RAM during use then the component may as well be placed in the first part of the memory.
In one embodiment the components in the set are preferably those components which are loaded into RAM during one or more test use cases of the device. This allows actual usage of the device to be used to optimise which components should be stored where. In another embodiment the components in the set are those components which are loaded into RAM because they are kernel components of the computing device operating system. These are components which of necessity need to be loaded to allow the device to operate.
In embodiments the memory is of a type which is incapable of supporting eXecute-In-Place (XIP) operations, which is why the software components need to be loaded into RAM for execution. Preferably the memory is NAND Flash memory, which is used in many modern devices because it provides large memory capacity at relatively lower cost than other types of memory.
From another aspect, the present invention also provides a system for allocating software components to a first part of a memory in a computing device and a second part of the memory in the device, comprising: a processor arranged, for a particular software component, to: i) determine if the software component is capable of being divided into memory pages for loading into and out of random access memory (RAM) of the computing device; and ii) store the software component in the first part of the memory or the second part of the memory in dependence on the determination as to whether the component is capable of being divided into memory pages for loading into and out of RAM; wherein the first part of the memory is a part from which software components can be paged in pages from the memory into RAM for execution, and the second part of the memory is a part from which whole components are read into RAM for execution, without being paged.
The further features, advantages and the like described above in respect of the first aspect can also be obtained in respect of the second aspect.
From a third aspect, the invention also provides a memory having a first part from which software components can be paged in pages from the memory into a RAM of a computing device for execution, and a second part from which whole components are read into RAM for execution, without being paged, wherein the memory has stored in the first part and the second part software components which have been allocated to the first part or the second part using the method and system of any of the preceding claims. In embodiments of the invention the memory is of a type which is incapable of supporting eXecute-In-Place (XIP) operations, and preferably the memory is NAND Flash memory.
Brief Description of the Drawings
Further features and advantages of the present invention will become apparent from the following description of embodiments thereof, presented by way of example only, and with reference to the accompanying drawings, wherein like reference numerals refer to like parts, and wherein:
Figure 1 is a block diagram of a typical smartphone architecture of the prior art;
Figure 2 is a diagram illustrating possible memory layouts which form the background to the present invention;
Figure 3 is a diagram illustrating how paged data can be paged into RAM;
Figure 4 is a diagram illustrating a paging cache;
Figure 5 is a diagram illustrating how a new page can be added to the paging cache;
Figure 6 is a diagram illustrating how pages can be aged within a paging cache;
Figure 7 is a diagram illustrating how aged pages can be rejuvenated in a paging cache;
Figure 8 is a diagram illustrating how a page can be paged out of the paging cache;
Figure 9 is a diagram illustrating the RAM savings obtained using demand paging;
Figure 10 is a flow diagram illustrating the steps performed in a method according to a first embodiment of the present invention;
Figure 11 is a flow diagram illustrating steps performed in the first embodiment of the present invention;
Figure 12 is a flow diagram of steps performed in a second embodiment of the present invention;
Figure 13 is a flow diagram of steps performed in one of the steps of Figure 12 in the second embodiment of the present invention;
Figure 14 is a flow diagram illustrating steps performed in one of the steps of Figure 12 in the second embodiment of the present invention; and
Figure 15 is a diagram of a ROM which has had files allocated to it using the embodiments of the invention.
Overview of the Operation of the Embodiments
Software components such as OS components or other components can be marked as paged or unpaged either by changing a flag in the header of the component (for executable components only) or adding a keyword to the instruction file that places files in ROM. The default behaviour of unmarked executable components can also be specified. Unmarked non-executable components will always be paged.
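A build-time tool might resolve the effective paged status of a component along the following lines. This is a hedged sketch rather than the actual ROM-building tool: the attribute names (oby_keyword, header_flag, is_executable) are invented stand-ins for the instruction-file keyword and executable-header flag mentioned above.

```python
def is_paged(component, default_for_unmarked_executables):
    """Resolve whether a component is treated as paged.

    component.oby_keyword -- 'paged', 'unpaged' or None (instruction-file keyword)
    component.header_flag -- 'paged', 'unpaged' or None (executable header flag)
    """
    if component.oby_keyword is not None:           # explicit keyword wins
        return component.oby_keyword == "paged"
    if component.is_executable:
        if component.header_flag is not None:       # flag in the executable's header
            return component.header_flag == "paged"
        return default_for_unmarked_executables     # specified default behaviour
    return True                                     # unmarked non-executables are always paged
```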
As mentioned previously, there is a problem if a paged component has a number of unpaged dependencies, as the RAM savings made by placing the paged executable in the core image may be offset by the RAM loss of having its unpaged dependencies in the core ROM image as well. Embodiments of the present invention to be described present several new strategies for handling the problem.
In a first embodiment, all paged executables and their dependencies (whether they are paged or unpaged) are placed in the core ROM image. Only unpaged dependencies that have no paged executables dependent on them are placed in the primary ROFS image.
This strategy does not attempt to limit the number of unpaged executables in the core ROM image.
In a second embodiment some unpaged components may in practice be used (and hence loaded into RAM) all or most of the time, irrespective of whether they are placed in the core ROM image or the primary ROFS image. These unpaged components are collected into a 'privileged set' of components that are placed in the core ROM image. All other unpaged components are placed in the primary ROFS image. The 'privileged set' may also contain unpaged executables that have a large number of paged executables dependent on them, where the cost of placing an unpaged executable in the core ROM image is outweighed by the benefit of having its paged dependencies in the core, resulting in a net RAM saving. Only those paged executables that have dependencies that are all paged or in the 'privileged set' are placed in the core ROM image. Other paged executables are placed in the primary ROFS image.
Determining the 'privileged set' is done by creating an initial ROM with as many components in the primary ROFS as possible. The use-case that is to be optimised for is then executed and the system is interrogated for which components are loaded during the use-case. The 'privileged set' is the intersection of the list of loaded components and the list of unpaged components.
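The second embodiment's placement rule can be expressed as a simple set-based test. The following is a sketch under assumed inputs (component names, a dependency map and the privileged set determined as just described); it is not any real build-tool API.

```python
def split_second_embodiment(components, paged, privileged, deps):
    """Second-embodiment core/ROFS split (illustrative).

    components -- iterable of component names
    paged      -- set of components marked as paged
    privileged -- the 'privileged set' of unpaged components
    deps       -- mapping from a component to its static dependencies
    """
    core, rofs = set(), set()
    for c in components:
        if c in privileged:
            # Privileged unpaged components always go in the core ROM image.
            core.add(c)
        elif c in paged and all(d in paged or d in privileged for d in deps.get(c, ())):
            # Paged, and every dependency is itself paged or privileged.
            core.add(c)
        else:
            # All other components go to the primary ROFS image.
            rofs.add(c)
    return core, rofs
```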
In a third embodiment an intermediate methodology is adopted. Here paged components which have all paged dependencies are placed in the core image, and unpaged components are placed in the ROFS, unless the component has to be in the core image, such as if it is a kernel component. Paged components are also placed in the ROFS if any of their dependencies are unpaged, again unless the component has to be in the core image for some other reason, such as it being a kernel component.
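The third, intermediate strategy differs mainly in how unpaged dependencies are handled. A comparable sketch follows, again with assumed inputs; must_be_core stands for components (such as kernel components) that have to be in the core image regardless.

```python
def split_third_embodiment(components, paged, must_be_core, deps):
    """Third-embodiment (intermediate) core/ROFS split (illustrative)."""
    core, rofs = set(), set()
    for c in components:
        if c in must_be_core:
            core.add(c)        # e.g. kernel components must be in the core image
        elif c in paged and all(d in paged for d in deps.get(c, ())):
            core.add(c)        # paged components whose dependencies are all paged
        else:
            rofs.add(c)        # unpaged components, or paged ones with unpaged dependencies
    return core, rofs
```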
Description of the Preferred Embodiments
Several embodiments of the present invention will now be described below, but prior to this, several parts of the embodiments which are common to each of the embodiments will first be described.
Embodiments of the present invention are directed towards providing a method and system for deciding whether a particular software component to be loaded onto a device such as a smartphone or the like needs to be in the part of memory which is capable of being demand paged, or whether it should be in a different part of the memory which is not capable of being demand paged, such as, for example, in the case of the Symbian operating system, the Read-Only File System (ROFS). It should be noted that the embodiments of the present invention are particularly suitable for determining the memory contents in the Symbian operating system where the split is between the core image and the ROFS, as described previously, but in other embodiments of the present invention not described here, but covered by the appended claims, the software components may be components of any software system, and the invention is not limited to use with operating system software solely, or the Symbian operating system software in particular.
With reference to Figure 10, Figure 10 illustrates a flow diagram of a first embodiment of the present invention. However, steps 10.2 and 10.4 of Figure 10 are common to all of the described embodiments, and will be described first.
At step 10.2, first of all, a determination is made as to which software components need to be present in the Read-Only Memory which is being built. This is a very high level step, and involves compiling a list of all of the software components which are required to be installed onto the device. In the particular embodiment being described, we are concerned with which components of an operating system are to be installed on the device. It will be understood that step 10.2 will commonly be performed by the device design team. Some devices may require only a subset of components of a particular operating system, whereas other devices may require more components, or a different subset of components. This will depend very much upon the device's purpose and its required functionality.
The second step in the method, that of step 10.4, requires a determination for each software component to be installed as to whether the component is "paged" or "unpaged". A "paged" component is a component which is capable of being paged, i.e. the code of the component can be read into RAM, from where it can be executed, in small blocks known as memory pages. As mentioned previously, in Symbian OS typically each page is 4kB in size. If a component is not capable of being split into pages for execution, then it is deemed to be "unpaged". Here, the component must typically be loaded whole into RAM, from where it can then be executed.
It should be appreciated that the determination as to whether a particular component is capable of being paged, i.e. is "paged", or should not be paged i.e. must be loaded whole into RAM and therefore is "unpaged", is beyond the scope of the present description, as it very much depends on the particular component, and its purpose and function. Typically, lower level components, such as kernel components will not be capable of being paged. Often, some analysis and testing is required to determine whether a particular software component is capable of being paged, and this analysis can be either static analysis wherein properties of the component are analysed, or dynamic analysis making use of test cases of the device, and experimenting with the component in either paged or unpaged form. Again, such considerations are beyond the scope of the present application, but the determination as to whether a particular component should be paged or unpaged is within the realm of the person skilled in the art.
Once it has been determined which software components are required to be installed onto the device, and whether the components are paged or unpaged, the embodiments of the invention can then be used to determine which components should be placed in that part of the memory which is capable of being paged into RAM, and which components should be placed into that other part of the memory from which components are read whole into RAM. In the Symbian OS, the part of the memory from which components can be paged is referred to as the core image, whereas that part of the memory from which components are read whole is referred to as the Read-Only File System (ROFS). The core image and the ROFS together make the composite file system (CFS).
With reference to Figure 10, in the first embodiment at step 10.6 a determination of the core/ROFS split for each component is performed, dependent on the paged status of the component and of its dependent components. Further details of this step are shown in the flow diagram of Figure 11, discussed later.
In Figure 10, once the core/ROFS split determination has been performed for each component, at step 10.8 the core ROM image is built to include the components determined to be in the core, and correspondingly the ROFS is also built to contain the components determined to be in the ROFS. At step 10.10, which should typically be performed for each smartphone device, the core ROM image which is built at step 10.8 is stored in the NAND Flash, during the smartphone device manufacturing process.
Thus, it will be appreciated that steps 10.2 to 10.8 need to be performed during the device design process, whereas step 10.10 is performed during the device manufacturing process.
Turning now to Figure 11, the steps involved in the core/ROFS split determination of step 10.6 of the first embodiment will now be described in more detail. Prior to this description, it is worth recalling that software components for which the core/ROFS split determination is to be performed can be marked as either paged or unpaged, either by changing a flag in the header of the component (for executable components only) or by adding a keyword to the instruction file (the OBEY file, or .OBY file) that places files in ROM. The default behaviour of unmarked executable components, i.e. those components which have neither a paged nor unpaged marking, can also be specified.
Unmarked non-executable components will always be paged.
With the above in mind, at step 11.2 the first step in the process of 10.6 is performed.
Here, an evaluation as to whether the present component which is being evaluated is an executable component is performed. If the component is not an executable, then processing proceeds to step 11.4, wherein the paged status of the component is evaluated. This evaluation is performed by looking at the instruction file (the OBY file), which indicates the paged status. If the paged status is that the component is paged, then at step 11.6 the component is placed into the core ROM image. If the paged status is that the component is unpaged, then at step 11.18 the component is placed in the primary ROFS image.
Returning to step 11.2, if it was determined here that the present component is in fact an executable, then at step 11.8 an evaluation is performed as to what is the default paging behaviour for executables. If this comes out that all executables are always to be paged, then processing proceeds to step 11.6, wherein the component is placed in the core ROM image. Conversely, if the default executable paging behaviour is that executables are never to be paged, then processing proceeds to step 11.18, wherein the component is placed in the primary ROFS image.
On the other hand, if the default executable paging behaviour is neither always paged nor never paged, then it is instead dependent on the particular markings for that component, i.e. either paged or unpaged, and processing proceeds to the evaluation of step 11.10, wherein these markings are examined.
More particularly, at step 11.10 the paged status of the component in the OBY file is examined. If there is no marking for this component, then processing proceeds to step 11.12. However, if the marking is such that the component is marked as paged, then processing proceeds to step 11.6, and the component is placed in the core ROM image.
If the component marking is "unpaged", then processing proceeds to a second evaluation step of step 11.16, wherein it is determined whether the component has any paged components dependent upon it. If the answer to this is positive i.e. the component does have paged components dependent upon it, then even though the component itself is unpaged, it is placed, at step 11.6, in the core ROM image. The reason for this is to maintain the system criterion that a paged component which is present in the core ROM image must also have its dependencies present in the core ROM image, even if those dependencies are not themselves paged. If, at step 11.16 the component is determined not to have any paged components dependent upon it, then there is no need for the unpaged component to be placed in the core ROM image, and at step 11.18 it can instead be placed in the primary ROFS image.
Returning to step 11.10, as mentioned, if the page status in the OBY file is unmarked, then processing proceeds to step 11.12, wherein the executable header is examined to determine if that contains a marking as to the paged status. If here there is a marking that the component is paged, then processing proceeds to step 11.6, wherein the component is placed in the core ROM image. If the marking is such that the component is unpaged, then processing proceeds to step 11.16, wherein an evaluation is performed as to whether the component has paged dependencies dependent upon it. If yes, then the component is placed in the core ROM image at step 11.6, for the same reasons as previously. If no, then the component is placed in the primary ROFS image, at step 11.18.
If the evaluation of step 11.12 indicates that the executable header has no page markings, then processing proceeds to step 11.14. Here the default behaviour for unmarked executables is performed. If the default behaviour for unmarked executables is to page the executable, then the component is placed in the core ROM image at step 11.6, otherwise, if the default behaviour is to have unmarked executables unpaged, then the evaluation of step 11.16 is again performed. Here, if the component has paged components dependent upon it then the component is placed in the core ROM image, whereas if the component has no paged components dependent upon it then the component is placed in the primary ROFS image.
The above processing is performed in turn for every component, to determine whether the component should be placed in the core ROM image, or the primary ROFS image.
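Gathered into one place, the Figure 11 decision flow for a single component can be sketched as below. It is a reconstruction of the flow chart, not the build tool itself; the attribute names and the three-valued default (always / never / use the markings) are assumptions made for the example.

```python
def place_component(c, default_exe_behaviour, default_unmarked_paged, has_paged_dependents):
    """Return 'core' or 'rofs' for one component, following the Figure 11 flow.

    default_exe_behaviour  -- 'always', 'never', or 'marked' (step 11.8)
    default_unmarked_paged -- True if unmarked executables default to paged (step 11.14)
    has_paged_dependents   -- callable: is any paged component dependent on c? (step 11.16)
    """
    def unpaged_placement():
        # Step 11.16: an unpaged component still goes in the core ROM image
        # if a paged component depends on it.
        return "core" if has_paged_dependents(c) else "rofs"

    if not c.is_executable:                                   # step 11.2
        # Step 11.4: non-executables follow the OBY marking (unmarked means paged).
        return "rofs" if c.oby_status == "unpaged" else "core"
    if default_exe_behaviour == "always":                     # step 11.8
        return "core"
    if default_exe_behaviour == "never":
        return "rofs"
    if c.oby_status == "paged":                               # step 11.10
        return "core"
    if c.oby_status == "unpaged":
        return unpaged_placement()
    if c.header_status == "paged":                            # step 11.12
        return "core"
    if c.header_status == "unpaged":
        return unpaged_placement()
    return "core" if default_unmarked_paged else unpaged_placement()   # step 11.14
```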
Once every component has been processed, at the end of the procedure, both the contents of the core ROM image and the primary ROFS image have been obtained i.e. the ROM contents have been built. During a subsequent device manufacturing process, therefore, the ROM contents output from this method can be loaded into the NAND Flash in the device.
With the first embodiment, therefore, paged components are placed in the core ROM image, together with their dependencies, whether the dependent components are paged or not. If a component is unpaged, and has no paged components dependent upon it, then it is placed in the primary ROFS image. By placing paged components in the core ROM image, the benefits of demand paging can be obtained for those components for which demand paging is suitable. In most cases a RAM saving will be obtained by doing this; however, whether a RAM saving is obtained in a particular case will depend upon the size of the unpaged dependent components which have also to be included in the core ROM image. However, generally a RAM saving is obtained, unless any one of the unpaged dependent components is particularly large.
To try and address the above problem, however, in a second embodiment to be described next, further processing is performed to determine whether or not unpaged components are in fact in a privileged set of components that should be in the core ROM image anyway. The second embodiment is based upon the realisation that some unpaged components may in practice be used (and hence loaded into RAM) all or most of the time, irrespective of whether they are placed in a core ROM image, or the primary ROFS image. If this is the case i.e. that such components are in any event loaded into RAM all or most of the time, then those components may as well be placed into the core ROM image. However, other unpaged components are placed into the ROFS image, and even if there are paged components which are dependent on such unpaged components, then the paged components are placed into the ROFS image with the unpaged components on which they are dependent. The processing performed by the second embodiment is shown in Figures 12 to 14.
More particularly, with reference to Figure 12, first of all at step 12.2 the components which are to be loaded onto the device are determined, and at step 12.4 a determination is made as to whether each component which is to be loaded is "paged", or "unpaged".
The factors considered in these determinations are the same as for steps 10.2 and 10.4, discussed previously, and hence will not be discussed further.
Once the components to be loaded have been determined, together with whether each component is paged or unpaged, then within the second embodiment, at step 12.6, a "privileged set" of unpaged components is first found. Figure 13 shows the procedure performed to build the privileged set of unpaged components.
With reference to Figure 13, firstly, at step 13.2 a ROM is created with most components (other than components for the kernel) placed in the ROFS, in a manner similar to that shown previously in layout A of Figure 2. This ROM is then installed on a test device. At step 13.4 the device is booted, and a particular use scenario is run on the device. For example, in the case of a smartphone, the use scenario may be performing a call, sending an email, or the like.
While the use scenario is being performed, at step 13.6 the loading of the software components into RAM is monitored, and a list is compiled of which components are loaded into RAM during the use test. Once the use test is over, at step 13.8 the list can be examined, and a second list compiled of which components of those which were loaded into RAM were in fact unpaged components. At step 13.10 the unpaged components which were loaded into RAM during the use test are recorded as members of the privileged set. Thus, the privileged set comprises a list of software component names, which are all unpaged components, but which were loaded into RAM during the use test scenario.
It should be understood that several use tests testing different scenarios can be performed, such that the privileged set contains the names of those unpaged components which are loaded into RAM during several different uses of the device. Of course, whether there are in fact several uses of the device will depend upon the device itself. For example, an MP3 player which only plays stored MP3 files may not have any further uses. However, an MP3 player which also has an in-built radio may have the additional radio use. The number of use scenarios for a particular device will therefore depend upon the type of device itself.
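The privileged-set construction of Figure 13 can be pictured as a set intersection: take the union of everything observed loading into RAM across the use scenarios, and keep only the unpaged components. A minimal sketch follows; the component names are made up purely for the example.

```python
def build_privileged_set(scenario_loads, unpaged_components):
    """scenario_loads: one set of component names observed loading into RAM per
    use scenario (making a call, sending an email, ...).  The privileged set is
    every unpaged component seen in at least one scenario."""
    loaded_in_any = set().union(*scenario_loads) if scenario_loads else set()
    return loaded_in_any & set(unpaged_components)

# Illustrative component names only:
call_scenario = {"euser.dll", "phoneapp.exe", "etel.dll"}
email_scenario = {"euser.dll", "msgs.exe", "imcm.dll"}
unpaged = {"euser.dll", "etel.dll", "camera.dll"}
print(build_privileged_set([call_scenario, email_scenario], unpaged))
# -> {'euser.dll', 'etel.dll'}
```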
Returning to Figure 12, after the privileged set of components has been compiled, at step 12.8 the core/ROFS split determination is performed for each component. This is performed in dependence on the paged status of the component itself as well as its dependencies, and also on whether those dependencies are members of the privileged set. The core/ROFS split determination is repeated for each component which is to be installed on the device. At the end of step 12.8, therefore, there will have been obtained a core ROM image containing those components which are to be in the core image, and a primary ROFS image containing those components which are to be in the primary ROFS. At step 12.10, therefore, the core ROM image can be built, as well as the primary ROFS image. Finally, at step 12.12, during the manufacturing process for a device, the core ROM image and primary ROFS image obtained through steps 12.8 and 12.10 can be stored on the NAND Flash in the device.
Further details of the steps performed during step 12.8 will now be described with respect to Figure 14.
More particularly, Figure 14 shows the steps performed during the core/ROFS split determination of step 12.8. The procedure shown in Figure 14 is repeated for each component for which the core/ROFS split determination needs to be made.
Referring to Figure 14, for the component for which the determination is presently being made, a first evaluation is performed at step 14.2 as to whether the component is an executable component. If the component is not an executable component, then at step 14.4 the paged status of the component in the OBY file is examined. If the status is that the component is "paged" then at step 14.6 the component is placed in the core ROM image, whereas otherwise if the status is "unpaged", then at step 14.14 the component is placed in the primary ROFS image.
If at step 14.2 it is determined that the component is an executable, then processing proceeds to a second evaluation at step 14.8, wherein the default paging behaviour for executables is examined. If the default paging behaviour for executables is that all executables should always be paged, then of course the component must be placed in the core ROM image, at step 14.6. However, if the default paging behaviour for executables is that executables should never be paged, then the component is placed in the primary ROFS image, at step 14.14. If, however, there is no such default paging behaviour for all executables specified, then processing proceeds to step 14.10, wherein a further evaluation is performed on the particular paged or unpaged marking of the particular component. In particular, the paged status of the particular component is examined in the OBY file. If the paged status is that the component is paged, then processing proceeds to a further evaluation, at step 14.12. This is an evaluation as to whether all of the component's dependencies are paged, or whether its dependencies are in the privileged set. If this evaluation returns positive, then this means that not only is the component itself paged, but that its dependencies are all paged, or are unpaged but are in the privileged set of unpaged components which will in any event be placed in the core ROM image. If this is the case, then the component is clearly suitable for paging, together with its dependencies, and hence is placed in the core ROM image at step 14.6.
However, if the paged status in the OBY file is that the component is unpaged, then processing proceeds to the evaluation of step 14.20. Here an evaluation is performed as to whether the component is listed in the privileged set of unpaged components, which in any event should be placed in the core ROM image to be loaded into RAM. If this is the case, i.e. the component is in the privileged set, then processing proceeds to step 14.6, wherein the component is placed in the core ROM image. If this is not the case, i.e. the component is unpaged and is not in the privileged set, then the component is placed in the primary ROFS image, at step 14.14.
If there is no paged status in the OBY file for the component at step 14.10, i.e. the component is unmarked, then processing proceeds to step 14.16, wherein the header of the executable is examined to determine whether there is a paged or unpaged marking in the header. If the executable header indicates that the component is a paged component, then processing proceeds to step 14.12, wherein the evaluation is performed as to whether all of the component's dependencies are paged or whether the dependencies are in the privileged set. The reason for this is as described previously, in that a paged executable is only placed in the core ROM image if its dependencies will also be placed in the core ROM image, i.e. they are either paged themselves, or in the privileged set of unpaged components which are placed in the core ROM image in any event. If this is the case, i.e. the evaluation of step 14.12 returns positive, then the component is placed in the core ROM image at step 14.6. However, if this is not the case, i.e. not all of the component's dependencies are paged and neither are they in the privileged set, then the component is placed in the primary ROFS image at step 14.14.
If at step 14.16 it is determined that the executable header does not have a marking as to whether the executable is paged or unpaged, then processing proceeds to a final evaluation at step 14.18.
At step 14.18 the default executable paging behaviour for the executable is examined.
If this is that the executable should be paged, then processing proceeds to step 14.12, wherein it is examined whether the component's dependencies are paged or are in the privileged set. If all of the component's dependencies are either paged or in the privileged set, then the component itself can be placed in the core ROM image, at step 14.6. Conversely, if not all of the component's dependencies are paged or in the privileged set, then the component is placed in the primary ROFS image, at step 14.14.
If the default behaviour for the executable is that it is unpaged, as evaluated at step 14.18, then a second evaluation is performed at step 14.20 to determine whether the component is in the privileged set of unpaged components which in any event need to be placed in the core ROM image. If this is the case, then the component is placed in the core ROM image at step 14.6. If this is not the case, then the component is placed in the primary ROFS image, at step 14.14.
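Pulling the Figure 14 branches together, the core/ROFS decision for the second embodiment can be sketched as below. The data model, the global executable policy, and the handling of an "unpaged" header marking (treated like an unmarked OBY entry, i.e. falling through to the privileged-set check) are assumptions made for this sketch, not details taken verbatim from the figures.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Set

# Illustrative data model and names; these are assumptions for the sketch only.
@dataclass
class Comp:
    name: str
    is_executable: bool
    oby_paged: Optional[bool] = None       # OBY-file marking, None if unmarked
    header_paged: Optional[bool] = None    # executable-header marking, None if unmarked
    dependencies: List[str] = field(default_factory=list)

EXEC_POLICY = None            # "always", "never", or None for no blanket policy (step 14.8)
DEFAULT_UNMARKED = "unpaged"  # assumed default for executables with no marking (step 14.18)

def deps_paged_or_privileged(c: Comp, paged: Dict[str, bool], privileged: Set[str]) -> bool:
    # Step 14.12: every dependency is either paged itself or in the privileged set.
    return all(paged.get(d, False) or d in privileged for d in c.dependencies)

def place_second_embodiment(c: Comp, paged: Dict[str, bool], privileged: Set[str]) -> str:
    if not c.is_executable:                                   # step 14.2
        # Steps 14.4/14.6/14.14; an unmarked non-executable is assumed to go to the ROFS.
        return "core" if c.oby_paged else "rofs"
    if EXEC_POLICY == "always":                               # step 14.8
        return "core"
    if EXEC_POLICY == "never":
        return "rofs"
    marked = c.oby_paged                                      # step 14.10
    if marked is None:
        marked = c.header_paged                               # step 14.16
    if marked is None:
        marked = (DEFAULT_UNMARKED == "paged")                # step 14.18
    if marked:                                                # paged: check the dependencies
        return "core" if deps_paged_or_privileged(c, paged, privileged) else "rofs"
    return "core" if c.name in privileged else "rofs"         # step 14.20

# Example: a paged executable whose single dependency is an unpaged privileged component.
c = Comp("app.exe", True, oby_paged=True, dependencies=["euser.dll"])
print(place_second_embodiment(c, paged={"euser.dll": False}, privileged={"euser.dll"}))  # -> core
```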
With the above method, therefore, a ROM is obtained which contains, in the core ROM image, those paged components whose dependencies are all either paged or in the privileged set, with all other components in the primary ROFS image. In this way, the core ROM image contains those components which are in any event almost always loaded into RAM, together with those components which are capable of being paged. Hence, the benefits of demand paging in terms of RAM savings can be obtained.
Various modifications may be made to the above described embodiments to provide further embodiments. For example, in the second embodiment, the privileged set was determined in dependence upon whether the unpaged components in the privileged set were in any event loaded into RAM during one or more test use cases. Thus, to determine the privileged set it was necessary to test the device using the use cases in advance.
In a further embodiment, however, the privileged set can be determined in a different way, and in particular based upon whether the components form part of the operating system kernel or not. If a component is a kernel component, then it will almost always be loaded into RAM irrespective of the use case. Thus, a privileged set can be compiled dependent on whether the component is a kernel component. The same procedure as Figure 14 can then be used, but with the different privileged set. This would result in paged components whose dependencies are all paged being placed in the core ROM image, but paged components which have unpaged dependencies that are not in the privileged set would be placed in the ROFS image. Unpaged components would automatically be placed in the ROFS image, unless they were kernel components, and hence in the privileged set.
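Under this variant the privileged set can be computed statically, without running any use scenario: it is simply the unpaged components flagged as belonging to the kernel. A small hypothetical sketch follows; the record type and the example names are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class Entry:                 # illustrative record, not taken from the patent
    name: str
    is_kernel: bool
    is_paged: bool

def kernel_privileged_set(entries):
    """Privileged set built statically: unpaged kernel components, no use-case testing needed."""
    return {e.name for e in entries if e.is_kernel and not e.is_paged}

print(kernel_privileged_set([Entry("ekern.exe", True, False),
                             Entry("camera.dll", False, False)]))   # -> {'ekern.exe'}
```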
Using the embodiments of the invention therefore results in a ROM image being built which can be stored in NAND flash memory, containing a core ROM image with those components which have been determined to be suitable for demand paging, and a primary ROFS image containing those components which will not be demand paged. Figure 15 shows such a ROM, which is then stored in NAND Flash in a device. Because the core ROM image contains a large amount of paged data, the XIP ROM image in RAM on the device is commonly much smaller, as shown previously in Figure 3, and hence significant RAM savings can be made.
Various further modifications and additions will be apparent to those skilled in the art to produce further embodiments of the present invention, any and all of which are intended to be encompassed by the appended claims.
Claims (18)
- 1. A method of allocating software components to a first part of a memory in a computing device and a second part of the memory in the device, comprising the steps, for a particular component, of: determining if the software component is capable of being divided into memory pages for loading into and out of random access memory (RAM) of the computing device; and storing the software component in the first part of the memory or the second part of the memory in dependence on the determination as to whether the component is capable of being divided into memory pages for loading into and out of RAM; wherein the first part of the memory is a part from which software components can be paged in pages from the memory into RAM for execution, and the second part of the memory is a part from which whole components are read into RAM for execution, without being paged.
- 2. A method according to claim 1, wherein the software component is stored in the first part of the memory if the component is capable of being divided into memory pages for loading into and out of RAM.
- 3. A method according to claim 2, wherein the software component is stored in the first part of the memory if the component is a dependency of another component which is capable of being divided into memory pages for loading into and out of RAM.
- 4. A method according to claim 3, wherein otherwise the software component is stored in the second part of the memory.
- 5. A method according to claim 1, wherein the software component is stored in the first part of the memory or the second part of the memory in further dependence on the determination as to whether other software components which are dependencies of the component are capable of being divided into memory pages for loading into and out of RAM.
- 6. A method according to claim 5, wherein the software component is stored in the first part of the memory if it is capable itself of being divided into memory pages for loading into and out of RAM and the other software components which are dependencies of the component are also all capable of being divided into memory pages for loading into and out of RAM.
- 7. A method according to claim 1, 5, or 6, wherein the software component is stored in the first part of the memory if it is capable itself of being divided into memory pages for loading into and out of RAM and the other software components which are dependencies of the component are members of a predetermined privileged set of components.
- 8. A method according to any of the preceding claims, wherein the software component is stored in the first part of the memory if it is a member of a predetermined privileged set of components.
- 9. A method according to claims 7 or 8, wherein the predetermined privileged set comprises those software components which during use of the computing device are in any event loaded into RAM.
- 10. A method according to claim 9, wherein the components in the set are those components which are loaded into RAM during one or more test use cases of the device.
- 11. A method according to claim 9, wherein the components in the set are those components which are loaded into RAM because they are kernel components of the computing device operating system.
- 12. A method according to any of the preceding claims, wherein the memory is of a type which is incapable of supporting eXecute-In-Place (XIP) operations.
- 13. A method according to claim 12, wherein the memory is NAND Flash memory.
- 14. A system for allocating software components to a first part of a memory in a computing device and a second part of the memory in the device, comprising: a processor arranged, for a particular software component, to: i) determine if the software component is capable of being divided into memory pages for loading into and out of random access memory (RAM) of the computing device; and ii) store the software component in the first part of the memory or the second part of the memory in dependence on the determination as to whether the component is capable of being divided into memory pages for loading into and out of RAM; wherein the first part of the memory is a part from which software components can be paged in pages from the memory into RAM for execution, and the second part of the memory is a part from which whole components are read into RAM for execution, without being paged.
- 15. A memory having a first part from which software components can be paged in pages from the memory into a RAM of a computing device for execution, and a second part from which whole components are read into RAM for execution, without being paged, wherein the memory has stored in the first part and the second part software components which have been stored in the first part or the second part using the method and system of any of the preceding claims.
- 16. A memory according to claim 15, wherein the memory is of a type which is incapable of supporting eXecute-In-Place (XIP) operations.
- 17. A memory according to claim 16, wherein the memory is NAND Flash memory.
- 18. A method substantially as hereinbefore described with reference to any of Figures 10 to 14.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB0809954A GB2460636A (en) | 2008-05-30 | 2008-05-30 | Storing operating-system components in paged or unpaged parts of memory |
PCT/FI2009/050464 WO2009144386A1 (en) | 2008-05-30 | 2009-06-01 | Method and apparatus for storing software components in memory |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB0809954A GB2460636A (en) | 2008-05-30 | 2008-05-30 | Storing operating-system components in paged or unpaged parts of memory |
Publications (2)
Publication Number | Publication Date |
---|---|
GB0809954D0 GB0809954D0 (en) | 2008-07-09 |
GB2460636A true GB2460636A (en) | 2009-12-09 |
Family
ID=39637948
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB0809954A Withdrawn GB2460636A (en) | 2008-05-30 | 2008-05-30 | Storing operating-system components in paged or unpaged parts of memory |
Country Status (2)
Country | Link |
---|---|
GB (1) | GB2460636A (en) |
WO (1) | WO2009144386A1 (en) |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5754817A (en) * | 1994-09-29 | 1998-05-19 | Intel Corporation | Execution in place of a file stored non-contiguously in a non-volatile memory |
US6349355B1 (en) * | 1997-02-06 | 2002-02-19 | Microsoft Corporation | Sharing executable modules between user and kernel threads |
GB2404748B (en) * | 2003-08-01 | 2006-10-04 | Symbian Ltd | Computing device and method |
KR100755701B1 (en) * | 2005-12-27 | 2007-09-05 | 삼성전자주식회사 | Apparatus and method of demanding paging for embedded system |
US7512767B2 (en) * | 2006-01-04 | 2009-03-31 | Sony Ericsson Mobile Communications Ab | Data compression method for supporting virtual memory management in a demand paging system |
- 2008-05-30 GB GB0809954A patent/GB2460636A/en not_active Withdrawn
- 2009-06-01 WO PCT/FI2009/050464 patent/WO2009144386A1/en active Application Filing
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5339406A (en) * | 1992-04-03 | 1994-08-16 | Sun Microsystems, Inc. | Reconstructing symbol definitions of a dynamically configurable operating system defined at the time of a system crash |
WO1999024905A1 (en) * | 1997-11-12 | 1999-05-20 | Intergraph Corporation | Apparatus and method of accessing random access memory |
US20080104358A1 (en) * | 1997-11-12 | 2008-05-01 | Karen Lee Noel | Managing physical memory in a virtual memory computer |
US6332172B1 (en) * | 1998-05-29 | 2001-12-18 | Cisco Technology, Inc. | Method and system for virtual memory compression in an embedded system |
GB2423843A (en) * | 2005-03-02 | 2006-09-06 | Symbian Software Ltd | Providing real time performance with memory paging by providing a real time and a non-real time version of the operating system. |
US20080022033A1 (en) * | 2006-07-19 | 2008-01-24 | International Business Machines Corporation | Boot read-only memory (rom) configuration optimization |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20220269513A1 (en) * | 2021-02-19 | 2022-08-25 | Macronix International Co., Ltd. | Serial NAND Flash With XIP Capability |
US11640308B2 (en) * | 2021-02-19 | 2023-05-02 | Macronix International Co., Ltd. | Serial NAND flash with XiP capability |
US12086615B2 (en) | 2021-02-19 | 2024-09-10 | Macronix International Co., Ltd. | Serial NAND flash with XIP capability |
Also Published As
Publication number | Publication date |
---|---|
GB0809954D0 (en) | 2008-07-09 |
WO2009144386A1 (en) | 2009-12-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10157268B2 (en) | Return flow guard using control stack identified by processor register | |
US9021243B2 (en) | Method for increasing free memory amount of main memory and computer therefore | |
JP5422652B2 (en) | Avoiding self-eviction due to dynamic memory allocation in flash memory storage | |
US20080046713A1 (en) | Overriding processor configuration settings | |
US20070118838A1 (en) | Task execution controller, task execution control method, and program | |
US20090172332A1 (en) | Information processing apparatus and method of updating stack pointer | |
US10789184B2 (en) | Vehicle control device | |
CN111427804B (en) | Method for reducing missing page interruption times, storage medium and intelligent terminal | |
US8789169B2 (en) | Microcomputer having a protection function in a register | |
US10346234B2 (en) | Information processing system including physical memory, flag storage unit, recording device and saving device, information processing apparatus, information processing method, and computer-readable non-transitory storage medium | |
US9037773B2 (en) | Methods for processing and addressing data between volatile memory and non-volatile memory in an electronic apparatus | |
JP2008532163A (en) | Computer device and method of operation paged in real time | |
JP2015035007A (en) | Computer, control program, and dump control method | |
JP2008532163A5 (en) | ||
GB2460636A (en) | Storing operating-system components in paged or unpaged parts of memory | |
KR100994723B1 (en) | selective suspend resume method of reducing initial driving time in system, and computer readable medium thereof | |
GB2461499A (en) | Loading software stored in two areas into RAM, the software in a first area is loaded whole and from a second it is demand paged loaded. | |
CN113569231B (en) | Multiprocess MPU protection method and device and electronic equipment | |
CN112654965A (en) | External paging and swapping of dynamic modules | |
KR101118111B1 (en) | Mobile communication terminal and booting method thereof | |
TWI760756B (en) | A system operative to share code and a method for code sharing | |
US20050027954A1 (en) | Method and apparatus to support the maintenance and reduction of FLASH utilization as it pertains to unused or infrequently referenced FLASH data | |
GB2460464A (en) | Memory paging control method using two cache parts, each maintained using a FIFO algorithm | |
GB2460462A (en) | Method for loading software components into RAM by modifying the software part to be loaded based on the memory location to be used. | |
KR101342074B1 (en) | Computer system and control method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| COOA | Change in applicant's name or ownership of the application | Owner name: NOKIA CORPORATION; Free format text: FORMER OWNER: SYMBIAN SOFTWARE LTD |
| WAP | Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1) | |