MIXED-MODE CODE GENERATION AND EXECUTION
BACKGROUND
With most programming languages, you either compile or interpret a program to run the program on a specific platform. A platform is a computer environment that is usually described as a combination of an operating system and hardware. When a program is executed, it can be referred to as being run on a platform, since programs must be run in a computer environment. Some well-known platforms include Windows 2000, Linux, Solaris, and MacOS.
The Java® language is unusual in that it is both compiled and interpreted. A Java compiler translates Java language source (.java) files into an intermediate language called Java bytecodes. The compiler places the Java bytecodes into class (.class) files, for execution on a Java platform. A Java interpreter then interprets the Java bytecodes to run the program. Java bytecodes are referred to as platform-independent because they can be run without modification on any hardware-based platform having a compatible Java platform on top of it. Various techniques are used to reduce the translation overhead of the interpreter and otherwise utilize resources efficiently or improve performance.
The Java platform is different from most platforms in that it is typically a software-only platform that runs on top of other hardware-based platforms. Thus, the Java platform itself is "platform-independent" with respect to hardware-based platforms. Since the Java platform insulates, for example, Java bytecodes from a specific hardware platform, the Java program (even after compilation into Java bytecodes) may have execution times that are worse than those of native code. To ameliorate the problem of relatively slow Java program execution, many techniques and inventions have been utilized, including smart compilers, well-tuned interpreters, and just-in-time (JIT) bytecode compilers.
Code that is compiled for execution on a specific hardware platform is sometimes referred to as native code. Unlike native code, Java bytecodes are platform-independent, which
means a Java program can be compiled on any platform that has a Java compiler. Compilation occurs only once, but interpretation happens each time the program is executed. Programmers occasionally refer to this platform-independent compilation (and subsequent interpretation on a computer with an appropriate interpreter) as "write once, run anywhere" code. This type of code is considered useful by software developers because applications can be developed quickly and delivered to users on multiple platforms. One challenge with this type of code is reducing the translation overhead when interpreting Java bytecodes (e.g., translating the Java bytecodes into native code).
Java programs are sometimes referred to as applets or applications. An applet is a Java program that can be run on a Java-enabled browser. An application is a stand-alone program that runs on the Java platform. The Java platform typically includes a Java Virtual Machine (JVM) and a Java Application Programming Interface (JAPI). The JAPI includes software components that are grouped into libraries known as packages. The JVM is a base for the Java platform that is ported onto various hardware-based platforms.
The Java interpreter is an implementation of the JVM. Java bytecodes are instructions, consisting of a one-byte opcode and some optional operands, for the JVM. The opcode instructs the JVM how to act and the JVM checks the operands for additional information, if necessary, and carries out the instruction.
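For illustration, the fetch-and-dispatch behavior described above can be sketched as follows. The interpreter and the opcode values below are hypothetical simplifications invented for this sketch, not the actual JVM instruction set.

```java
// A hypothetical three-opcode interpreter illustrating fetch-and-dispatch.
// The opcode values below are invented for this sketch and are not actual
// JVM opcodes.
public class TinyInterpreter {
    static final int HALT = 0x00;  // stop and return the top of the stack
    static final int PUSH = 0x01;  // one operand: the value to push
    static final int ADD  = 0x02;  // pop two values, push their sum

    // Interprets the instruction stream and returns the value left on top
    // of the operand stack when HALT is reached.
    public static int run(int[] code) {
        int[] stack = new int[16];
        int sp = 0;                  // stack pointer
        int pc = 0;                  // program counter, as described above
        while (true) {
            int opcode = code[pc++]; // fetch the one-byte opcode
            switch (opcode) {
                case PUSH:
                    stack[sp++] = code[pc++];  // read the operand, push it
                    break;
                case ADD:
                    stack[sp - 2] = stack[sp - 2] + stack[sp - 1];
                    sp--;
                    break;
                case HALT:
                    return stack[sp - 1];
                default:
                    throw new IllegalStateException("unknown opcode " + opcode);
            }
        }
    }
}
```

Running `run(new int[]{PUSH, 2, PUSH, 3, ADD, HALT})` leaves 5 on the stack, mirroring how the program counter advances past each opcode and any operands it carries.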
The JVM has "virtual hardware" that includes registers, a stack, a garbage-collection heap, and a method area. JVM address size is 32 bits, typically allowing the JVM to address up to 4 gigabytes of memory, with each memory location containing one byte. The stack, the garbage-collection heap, and the method area are located within the addressable memory, but their exact locations may be implementation-dependent. The stack and garbage-collection heap are aligned on word (e.g., 32-bit) boundaries, while the method area is aligned on byte boundaries within the addressable memory.
The JVM includes a program counter (also referred to as a PC register) and three registers (sometimes referred to as the vars register, frame register, and optop register) that manage the stack. The PC register keeps track of the memory location from which the JVM
should be executing instructions. The other three registers point to various parts of a stack frame, described later.
The method area, because it contains Java bytecodes, is aligned on byte boundaries within the addressable memory. The PC register points to a Java bytecode. After the JVM carries out an instruction associated with the Java bytecode to which the PC register points, the PC register points to the next Java bytecode. (Next may or may not mean the next sequential bytecode in memory.) Instructions associated with Java bytecodes operate primarily on the stack. This design helps keep the instruction set associated with the JVM, and the implementation of the JVM, small.
The stack stores parameters for and results of Java bytecodes. The JVM passes the parameters to and returns values from methods. The stack is also used to keep track of the state of each method invocation. The state of a method invocation is called a stack frame, which holds the state (e.g., local variables, execution environment, and operand stack) for an invocation of a method. The vars, frame, and optop registers respectively point to a local variables section, an execution environment section, and an operand stack section (which is used as a workspace by the JVM when interpreting Java bytecodes) of the current stack frame.
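The relationship between a stack frame's sections can be illustrated with a simplified model. The class below is an illustrative sketch of the concepts described above, not the JVM's actual internal layout.

```java
// An illustrative model of one stack frame's sections. This is a sketch,
// not the JVM's actual internal representation.
public class FrameSketch {
    static class StackFrame {
        final int[] localVariables;  // section pointed to by the vars register
        final int returnPc;          // part of the execution environment section
        final int[] operandStack;    // workspace section for the interpreter
        int optop;                   // next free operand-stack slot (optop register)

        StackFrame(int maxLocals, int maxStack, int returnPc) {
            this.localVariables = new int[maxLocals];
            this.operandStack = new int[maxStack];
            this.returnPc = returnPc;
            this.optop = 0;
        }

        void push(int v) { operandStack[optop++] = v; }
        int pop()        { return operandStack[--optop]; }
    }

    // Simulates an invocation that doubles its argument on the operand stack.
    public static int doubleLocal(int value) {
        StackFrame frame = new StackFrame(1, 2, 0);
        frame.localVariables[0] = value;        // parameter passed into the frame
        frame.push(frame.localVariables[0]);
        frame.push(frame.localVariables[0]);
        frame.push(frame.pop() + frame.pop());  // instructions operate on the stack
        return frame.pop();                     // result returned via the stack
    }
}
```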
The garbage-collection heap includes the objects of a Java program. The runtime environment keeps track of references to each object and frees the memory associated with the objects when they are no longer referenced. This is called garbage collection.
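One simple way to keep track of references, sketched below, is a per-object reference count. This is an illustrative strategy only; the JVM does not mandate a particular garbage-collection algorithm.

```java
// A reference-count sketch of the tracking described above. Reference
// counting is only one possible strategy; the JVM specification does not
// mandate a particular garbage-collection algorithm.
import java.util.HashMap;
import java.util.Map;

public class RefCountSketch {
    private final Map<String, Integer> refCounts = new HashMap<>();

    public void addReference(String objectId) {
        refCounts.merge(objectId, 1, Integer::sum);
    }

    // Returns true if the object became unreferenced and was "collected".
    public boolean removeReference(String objectId) {
        int remaining = refCounts.merge(objectId, -1, Integer::sum);
        if (remaining <= 0) {
            refCounts.remove(objectId);  // free the memory for the object
            return true;
        }
        return false;
    }
}
```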
Java processors have been built that process Java bytecodes directly. Although this reduces portability, it can meet certain demands. For example, a standalone Java chip design can enable capabilities similar to those of a general-purpose computer. The processor of a Java chip executes Java bytecodes directly. Most current processors execute a subset of the full Java instruction set. Alternatively, a core, which is a circuit added to, for example, application-specific integrated circuit/system-on-a-chip (ASIC/SoC) devices, can provide Java hardware processing capability. Java hardware can provide advantages when working with limited memory and power resources.
An example of an acceleration device is the Jazelle® Java hardware accelerator, which is popular for mobile Java applications. The Jazelle hardware accelerator can run on two different instruction sets, ARM and Thumb. Thumb is a compressed instruction set. ARM produces microprocessor cores, such as the ARM7EJ-S, ARM1136J(F)-S, and ARM1176JZ(F)-S, that are both Jazelle- and Thumb-compatible.
Interpreting Java bytecodes using a Jazelle hardware accelerator can be referred to as operating in "Jazelle mode." In Jazelle mode, the efficient hardware interpreter handles most of the Java bytecodes, leaving complex or unknown Java bytecodes to be handled by support software residing within the JVM. The Jazelle processor may cache Java bytecodes in an instruction cache, resulting in good cache density.
While Jazelle mode works reasonably well for mobile computing devices, sophisticated hardware acceleration may not always work well on small platforms with limited resources. For example, there may not be enough room for both a JVM and a program to run, since a JVM can easily take up a megabyte of RAM, while available memory may be significantly less. As Java platforms acquire more features, this problem may persist even as memories grow larger. Jazelle addresses this problem by using the ARM or Thumb instruction sets and skipping Java bytecodes it does not know how to handle.
A technique that may be used to ensure no Java bytecodes are skipped is ahead-of-time (AOT) compilation. AOT compilers can process Java bytecodes in advance, converting the Java bytecodes into native code.
For example, a compiler could translate Java bytecodes into native code, such as ARM (or Thumb) instructions. Translating the Java bytecodes into ARM instructions can be referred to as operating in "ARM mode." This technique may remove translation overheads and, using optimization techniques, can further remove redundancy present in the Java bytecodes. ARM mode can be more efficient, CPU cycle-wise, than Jazelle mode, due to the additional analysis and optimizations applied. However, ARM instructions typically consume more space, and more instruction-cache, than the Java bytecodes, making them substantially less efficient in this regard. This can lead to pressure in the instruction cache, causing cache displacement as the
working set of the Java application and the libraries used exceeds the storage available within the cache. Compared to Jazelle mode, cache density is relatively poor in ARM mode.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the invention are illustrated in the figures. However, the embodiments and figures are illustrative rather than limiting. The figures provide examples of the invention.
FIG. 1 depicts an example of a system for generating and executing mixed mode code.
FIG. 2 depicts an example of a system for generating and executing mixed mode code according to another embodiment.
FIG. 3 depicts an example of a system for generating and executing mixed mode code according to yet another embodiment.
FIG. 4 depicts a flowchart of a process for providing a mixed-mode instruction stream.
FIG. 5 depicts an example of a system for generating and executing mixed mode code according to an alternative embodiment.
FIG. 6 depicts an example of a device for generating mixed-mode code.
FIG. 7 depicts an example of a device for executing mixed-mode code.
FIG. 8 depicts an example of a device for executing mixed-mode code according to another embodiment.
FIG. 9 depicts a networked system for use in an embodiment.
FIG. 10 depicts a computer system for use in the system of FIG. 9.
DETAILED DESCRIPTION
FIG. 1 depicts an example of a system 100 for generating and executing mixed mode code. The system 100 includes a mixed-mode compiler 102 and a mixed-mode interpreter 104. The mixed-mode compiler 102 converts Java source code into mixed-mode code that includes Java bytecode and native code. The mixed-mode interpreter 104 produces native code from the mixed-mode code.
In the example of FIG. 1, the mixed-mode compiler 102 generates a mix of Java bytecode, similar to the Java bytecode that a Java compiler might produce, and native instructions, such as ARM or Thumb instructions. In an embodiment, the mixed-mode compiler 102 creates Java bytecodes for instructions that are part of an instruction set. The instruction set may include, for example, only those instructions that are part of the Thumb instruction set. The mixed-mode compiler 102 converts instructions that are not included in the instruction set into native code instead.
The mixed-mode compiler 102 can present a more optimal instruction stream to the mixed-mode interpreter than a Java compiler can provide. For instance, a Java compiler would simply provide Java bytecode, all of which must eventually be translated into native code when a Java method associated with the Java bytecode is executed.
The mixed-mode interpreter 104 translates the Java bytecode from the mixed-mode code into native code. The mixed-mode interpreter 104 produces the translated Java bytecode together with the native code from the mixed mode code as fully converted native code.
The mixed-mode interpreter 104 can achieve greater instruction cache density when executing a method associated with the mixed-mode code than when executing a method compiled purely into native code.
The advantages of the system 100 may become increasingly significant as the working set of Java applications and Java libraries grows. This will almost certainly occur as the Java environment becomes more feature-capable. Using mixed-mode code, larger working sets can
be accommodated without substantially increasing instruction cache sizes, resulting in fewer external memory bus cycles, which can save both energy and time.
FIG. 2 depicts an example of a system 110 for generating and executing mixed mode code according to another embodiment. The system 110 is similar to the system 100 (FIG. 1), but includes additional components. The additional components include a discrimination module 112, an optimization engine 114, and a hardware accelerator 116.
In the example of FIG. 2, the mixed-mode compiler 102 generates a mix of Java bytecode and native instructions. In an embodiment, the mixed-mode compiler 102 creates Java bytecodes for instructions that are part of an instruction set. The instruction set may include, for example, only those instructions that are compatible with the hardware accelerator 116. For example, if the hardware accelerator 116 would recognize an instruction to push a zero onto a stack, then the mixed-mode compiler 102 may create a Java bytecode having the value of, for example, '60' (i.e., a bytecode value associated with pushing a zero onto a stack in an exemplary Java application).
In an embodiment, the mixed-mode compiler 102 creates native code for instructions that are not part of the instruction set. Similar to the example given in the preceding paragraph, if the hardware accelerator 116 would not recognize an instruction such as, for example, a floating-point instruction, then the mixed-mode compiler 102 converts the floating-point instruction to native code.
In an embodiment, the discrimination module 112 distinguishes between the instructions that are part of the instruction set and those that are not. The discrimination module 112 may include a database of instructions that are compatible with the hardware accelerator 116. If the hardware accelerator 116 is compatible with a predetermined set of instructions, the mixed-mode compiler 102 is said to be aware of such instructions when the discrimination module 112 is capable of distinguishing between instructions that are part of the predetermined set and instructions that are not. For example, if the discrimination module 112 can distinguish between instructions that are and are not compatible with Jazelle hardware accelerators, the mixed-mode compiler 102 may be referred to as "Jazelle-aware."
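The discrimination module's membership test can be sketched as follows. The set of accelerator-compatible opcodes stands in for the database described above, and the opcode values used in the usage example are hypothetical.

```java
// A sketch of the discrimination module's membership test. A set of
// accelerator-compatible opcodes stands in for the database described
// in the text; the opcode values used with it are hypothetical.
import java.util.HashSet;
import java.util.Set;

public class DiscriminationSketch {
    private final Set<Integer> acceleratorOpcodes = new HashSet<>();

    public DiscriminationSketch(int... compatibleOpcodes) {
        for (int opcode : compatibleOpcodes) {
            acceleratorOpcodes.add(opcode);
        }
    }

    // True if the instruction can remain a bytecode for the accelerator;
    // false if the compiler should emit native code for it instead.
    public boolean isAcceleratorCompatible(int opcode) {
        return acceleratorOpcodes.contains(opcode);
    }
}
```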
The optimization engine 114 may provide certain optimizations to the native code. Indeed, one of the advantages of compiling instructions as native code is that the instructions can be optimized for execution on a platform. The optimization engine 114 may analyze all of the instructions that are being produced as mixed-mode code, including those instructions that are created as Java bytecodes, and optimize the native code accordingly. For example, instructions associated with a Java method may include some instructions that will be compiled into Java bytecode and some instructions that will be compiled into native code. The optimization engine 114 may use all of the instructions associated with the Java method to optimize the native code.
Even a small increase in the time it takes to optimize may make some optimization undesirable in a just-in-time (JIT) system. In an embodiment, the optimization engine 114 is included in an ahead-of-time (AOT) compiler, which allows for optimization that might be time- intensive or resource-intensive on, for example, a device having limited resources. The native code can be highly optimized.
In addition to optimizing the native code, the optimization engine 114 may reduce redundancy in the Java bytecode as well. For example, the optimization engine 114 may determine, based on an analysis of the Java source code, that certain instructions are redundant. In this example, the optimization engine 114 may optimize even those instructions that are produced as Java bytecode.
The mixed-mode interpreter 104 translates the Java bytecode from the mixed-mode code into native code. In an embodiment, the Java bytecode includes instructions that are compatible with the hardware accelerator 116. Accordingly, the hardware accelerator 116 may be able to interpret the Java bytecode with great efficiency and speed. Interpreting Java bytecodes often results in good instruction cache density, which is particularly valuable on platforms with limited resources, such as cell phones. Further, hardware acceleration, as the name implies, results in better performance.
The mixed-mode interpreter 104 produces the translated Java bytecode together with the optimized native code from the mixed mode code as fully converted native code. Since the optimized native code was compiled ahead-of-time, translation overheads that may have
otherwise been associated with the optimized native code are avoided. Also, executing the optimized native code is typically less resource intensive (particularly with respect to CPU cycles), due to the additional analysis and optimizations applied. Although native code typically consumes more space and more instruction cache than Java bytecodes, the use of both optimized native code and Java bytecode that is compatible with the hardware accelerator 116 is predicted to be, in some cases, the best of both worlds.
FIG. 3 depicts an example of a system 111 for generating and executing mixed mode code according to yet another embodiment. The system 111 is similar to the system 110 (FIG. 2). The mixed-mode compiler 102, with the help of the discrimination module 112, selects the Java source code that corresponds to instructions that are compatible with the hardware accelerator 116 and converts the selected Java source code for compiling into Java bytecode. The mixed-mode compiler 102 sends the Java bytecode directly to the hardware accelerator 116 for interpretation. The hardware accelerator 116 interprets the Java bytecode and sends the interpreted code to the processor 109 for execution. As for the source code corresponding to instructions that are not compatible with the hardware accelerator 116, the mixed-mode compiler 102 compiles such code into native code and optimizes the native code with the help of the optimization engine 114. The mixed-mode compiler 102 sends the optimized native code to the processor 109 for execution.
FIG. 4 depicts a flowchart 120 of a process for providing a mixed-mode instruction stream. The process can be implemented on a system similar to, for example, the system 100 (FIG. 1), the system 110 (FIG. 2), or the system 111 (FIG. 3). In an embodiment, the process starts with an act 122 that includes distinguishing between a first instruction that is part of an instruction set and a second instruction that is not part of the instruction set. For example, the first instruction may be suitable for interpretation by a hardware accelerator, while the second instruction may not be. This may involve one or more comparisons between various instructions and/or the instruction set.
In an embodiment, the process continues with an act 124 that includes compiling the first instruction into a bytecode. The bytecode may be a Java bytecode. The act 124 may be carried out by a Jazelle-aware compiler, for example. In this case, the bytecode will be associated with an instruction that is Jazelle-compatible.
In an embodiment, the process continues with an act 126 that includes compiling the second instruction into native code. The native code may be ARM code, for example. The act 126 may be carried out by a Jazelle-aware compiler. In this example, the native code will be associated with an instruction that is not Jazelle-compatible. In an alternative example, the native code will be associated with an instruction that would be handled without taking advantage of Jazelle hardware acceleration.
In an embodiment, the process continues with an act 128 that includes providing the bytecode to the hardware accelerator for interpretation. The bytecode may be placed into a code buffer. The native code resulting from interpretation of the bytecode by the hardware accelerator is provided to the processor as a part of an instruction stream.
In an embodiment, the process continues with an act 130 that includes providing the native code created by the Jazelle-aware compiler as part of the instruction stream. A Jazelle-aware compiler may use the native code as a trigger to cause a mode change from Jazelle mode to, for example, ARM mode. This process and other processes are depicted as serially arranged modules. However, modules of the processes may be reordered or arranged for parallel execution, as appropriate.
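Acts 122 through 130 can be sketched as a partitioning pass over a method's instructions. The opcode values and the string markers for the two kinds of output are hypothetical placeholders for this sketch.

```java
// A sketch of acts 122 through 130: each instruction is compared against a
// compatible set (act 122) and emitted as bytecode (act 124) or native code
// (act 126); both kinds of output join the instruction stream (acts 128/130).
// The string markers are hypothetical placeholders.
import java.util.HashSet;
import java.util.Set;

public class MixedModePartition {
    public static String partition(int[] opcodes, int[] compatibleOpcodes) {
        Set<Integer> compatible = new HashSet<>();
        for (int opcode : compatibleOpcodes) {
            compatible.add(opcode);
        }
        StringBuilder stream = new StringBuilder();
        for (int opcode : opcodes) {
            if (stream.length() > 0) {
                stream.append(' ');
            }
            if (compatible.contains(opcode)) {
                stream.append("bytecode:").append(opcode);  // act 124
            } else {
                stream.append("native:").append(opcode);    // act 126
            }
        }
        return stream.toString();
    }
}
```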
FIG. 5 depicts an example of a system 140 for generating and executing mixed-mode code according to an alternative embodiment. In this example, system 140 is similar to the system 100 (FIG. 1), but includes a Java compiler 142. The Java compiler 142 converts Java source code into Java bytecode. The mixed-mode compiler 102 then translates some of the Java bytecode into native code or leaves the Java bytecode as is. This alternative embodiment enables the use of an aspect of embodiments described herein with pre-compiled Java code.
The examples described thus far include components that may or may not be remotely located. For example, the mixed-mode compiler 102 and the mixed-mode interpreter 104 may be respectively located on a device for generating mixed-mode code and a device for executing the mixed-mode code.
FIG. 6 depicts an example of a device 150 for generating mixed-mode code. The device 150 includes an optional Java language editor 152, a Java mixed-mode compiler 154, an instruction set reference database 156, optimization rules 158, a code buffer 160, and a processor 162.
The optional Java language editor 152 facilitates the editing of Java source code by a Java programmer. Once the Java source code is ready, the Java mixed-mode compiler 154 can be used to translate the Java source code into mixed-mode code that includes Java bytecode and native code.
The mixed-mode compiler 154 may use the instruction set reference database 156 to distinguish between instructions that should be represented as Java bytecode and instructions that should be represented as native code. The instruction set reference database 156 may include a list of valid instructions, or general rules for determining whether an instruction is represented in a predetermined instruction set. Instructions that are not found or otherwise determinable from the instruction set reference database 156 are instead represented as native code.
The mixed-mode compiler 154 may use the optimization rules 158 to ensure that the native code is optimized. The optimization rules 158 may include procedures, techniques, or instructions useful for optimizing code, as should be understood to one who is skilled in the art of compilers.
The mixed-mode compiler 154 generates a mix of bytecode and native instructions for a given Java method. The mixed-mode compiler 154 may be made, for example, Jazelle-aware using the instruction set reference database 156. In this example, the mixed-mode compiler 154 emits the Java bytecode that Jazelle can handle in-line into the code buffer 160 holding the compiler's generated output for the method. In this example, when the mixed-mode compiler 154 encounters bytecode(s) that Jazelle cannot handle, the mixed-mode compiler 154 emits the native translation of such bytecodes (having analyzed and optimized such bytecodes in light of all the bytecode in the method, using the optimization rules 158).
The mixed-mode compiler 154 uses the first of the unhandled instructions as a trigger to cause a mode change from Jazelle mode to, for example, ARM mode, and arranges for the software handler address for that instruction to be the start address of a native code sequence in the code buffer 160. The sequence itself may re-establish the software handler address to the system default, and ultimately terminates with a mode change back to Jazelle, if Jazelle-compatible instructions follow. Jazelle then fetches these instructions, and so on.
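The emission strategy described above can be sketched as follows. The string markers are hypothetical stand-ins for real instruction encodings and mode-change sequences.

```java
// A sketch of the emission strategy above: handled bytecodes are emitted
// in-line, and the first unhandled bytecode triggers a mode change followed
// by native code and, if handled bytecodes follow, a change back. The
// string markers are hypothetical stand-ins for real instruction encodings.
import java.util.HashSet;
import java.util.Set;

public class CodeBufferSketch {
    public static String emit(int[] bytecodes, int[] handledOpcodes) {
        Set<Integer> handled = new HashSet<>();
        for (int opcode : handledOpcodes) {
            handled.add(opcode);
        }
        StringBuilder buffer = new StringBuilder();
        boolean inJazelleMode = true;
        for (int bc : bytecodes) {
            if (handled.contains(bc)) {
                if (!inJazelleMode) {
                    append(buffer, "enter-jazelle");  // mode change back
                    inJazelleMode = true;
                }
                append(buffer, "bytecode:" + bc);     // emitted in-line
            } else {
                if (inJazelleMode) {
                    append(buffer, "enter-arm");      // trigger: mode change
                    inJazelleMode = false;
                }
                append(buffer, "native:" + bc);       // native translation
            }
        }
        return buffer.toString();
    }

    private static void append(StringBuilder buffer, String token) {
        if (buffer.length() > 0) {
            buffer.append(' ');
        }
        buffer.append(token);
    }
}
```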
The mixed-mode compiler 154 may need to use the processor 162 to perform actions, as is well-understood in the art of computer engineering.
FIG. 7 depicts an example of a device 170 for executing mixed-mode code, according to certain embodiments. The device 170 includes a Java mixed-mode interpreter 172, a bytecode optimizer 174, and a processor 176. The Java mixed-mode interpreter 172 may receive mixed-mode code that comprises native code and Java bytecode. The Java mixed-mode interpreter 172 executes Java methods by interpreting Java bytecodes in-line in an instruction stream, which converts the Java bytecodes into native code. The Java mixed-mode interpreter 172 may use the bytecode optimizer 174, which may be a hardware or software accelerator, to efficiently interpret the Java bytecodes. Thus, an optimal instruction stream is presented to the processor. The mixed-mode interpreter 172 may need to use the processor 176 to perform actions, as is well-understood in the art of computer engineering.
FIG. 8 depicts an example of a device 179 for executing mixed-mode code, according to certain embodiments. The device 179 includes a native code processor 180, and a bytecode interpreter 173. The native code from the mixed mode code is sent to native code processor 180 for execution. Java bytecode from the mixed mode code is sent to bytecode interpreter 173 for efficient interpretation. The native code resulting from interpretation by bytecode interpreter 173 is sent to native code processor 180 for execution. Thus, an optimal instruction stream is presented to the processor 180 for execution.
The following description of FIGS. 9 and 10 is intended to provide an overview of computer hardware and other operating components suitable for performing the processes of the invention described herein, but is not intended to limit the applicable environments. Similarly,
the computer hardware and other operating components may be suitable as part of the apparatuses of the invention described herein. The invention can be practiced with other computer system configurations, including hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, and the like. The invention can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network.
FIG. 9 depicts a networked system 900 that includes several computer systems coupled together through a network 902, such as the Internet. The term "Internet" as used herein refers to a network of networks which uses certain protocols, such as the TCP/IP protocol, and possibly other protocols such as the hypertext transfer protocol (HTTP) for hypertext markup language (HTML) documents that make up the World Wide Web (the web). The physical connections of the Internet and the protocols and communication procedures of the Internet are well known to those of skill in the art.
The web server 904 is typically at least one computer system which operates as a server computer system and is configured to operate with the protocols of the world wide web and is coupled to the Internet. The web server system 904 can be a conventional server computer system. Optionally, the web server 904 can be part of an ISP which provides access to the Internet for client systems. The web server 904 is shown coupled to the server computer system 906 which itself is coupled to web content 908, which can be considered a form of a media database. While two computer systems 904 and 906 are shown in FIG. 9, the web server system 904 and the server computer system 906 can be one computer system having different software components providing the web server functionality and the server functionality provided by the server computer system 906, which will be described further below.
Access to the network 902 is typically provided by Internet service providers (ISPs), such as the ISPs 910 and 916. Users on client systems, such as client computer systems 912, 918, 922, and 926 obtain access to the Internet through the ISPs 910 and 916. Access to the Internet allows users of the client computer systems to exchange information, receive and send e-mails, and view documents, such as documents which have been prepared in the HTML format. These
documents are often provided by web servers, such as web server 904, which are referred to as being "on" the Internet. Often these web servers are provided by the ISPs, such as ISP 910, although a computer system can be set up and connected to the Internet without that system also being an ISP.
Client computer systems 912, 918, 922, and 926 can each, with the appropriate web browsing software, view HTML pages provided by the web server 904. The ISP 910 provides Internet connectivity to the client computer system 912 through the modem interface 914, which can be considered part of the client computer system 912. The client computer system can be a personal computer system, a network computer, a web TV system, or other computer system. While FIG. 9 shows the modem interface 914 generically as a "modem," the interface can be an analog modem, ISDN modem, cable modem, satellite transmission interface (e.g., "direct PC"), or other interface for coupling a computer system to other computer systems.
Similar to the ISP 910, the ISP 916 provides Internet connectivity for client systems 918, 922, and 926, although as shown in FIG. 9, the connections are not the same for these three computer systems. Client computer system 918 is coupled through a modem interface 920 while client computer systems 922 and 926 are part of a LAN 930.
Client computer systems 922 and 926 are coupled to the LAN 930 through network interfaces 924 and 928, which can be Ethernet network or other network interfaces. The LAN 930 is also coupled to a gateway computer system 932 which can provide firewall and other Internet-related services for the local area network. This gateway computer system 932 is coupled to the ISP 916 to provide Internet connectivity to the client computer systems 922 and 926. The gateway computer system 932 can be a conventional server computer system.
Alternatively, a server computer system 934 can be directly coupled to the LAN 930 through a network interface 936 to provide files 938 and other services to the clients 922 and 926, without the need to connect to the Internet through the gateway system 932.
FIG. 10 depicts a computer system 1040 for use in the system 900 (FIG. 9). The computer system 1040 may be a conventional computer system that can be used as a client
computer system or a server computer system or as a web server system. Such a computer system can be used to perform many of the functions of an Internet service provider, such as ISP 910 (FIG. 9).
In the example of FIG. 10, the computer system 1040 includes a computer 1042, I/O devices 1044, and a display device 1046. The computer 1042 includes a processor 1048, a communications interface 1050, memory 1052, a display controller 1054, non-volatile storage 1056, and an I/O controller 1058. The computer system 1040 may be coupled to or include the I/O devices 1044 and display device 1046.
The computer 1042 interfaces to external systems through the communications interface 1050, which may include a modem or network interface. It will be appreciated that the communications interface 1050 can be considered to be part of the computer system 1040 or a part of the computer 1042. The communications interface can be an analog modem, ISDN modem, cable modem, token ring interface, satellite transmission interface (e.g., "direct PC"), or other interface for coupling a computer system to other computer systems.
The processor 1048 may be, for example, a conventional microprocessor such as an Intel Pentium microprocessor or a Motorola PowerPC microprocessor. The memory 1052 is coupled to the processor 1048 by a bus 1060. The memory 1052 can be dynamic random access memory (DRAM) and can also include static RAM (SRAM). The bus 1060 couples the processor 1048 to the memory 1052, to the non-volatile storage 1056, to the display controller 1054, and to the I/O controller 1058.
The I/O devices 1044 can include a keyboard, disk drives, printers, a scanner, and other input and output devices, including a mouse or other pointing device. The display controller 1054 may control, in the conventional manner, a display on the display device 1046, which can be, for example, a cathode ray tube (CRT) or liquid crystal display (LCD). The display controller 1054 and the I/O controller 1058 can be implemented with conventional, well-known technology.
The non-volatile storage 1056 is often a magnetic hard disk, an optical disk, or another form of storage for large amounts of data. Some of this data is often written, by a direct memory access process, into memory 1052 during execution of software in the computer 1042. One of skill in the art will immediately recognize that the terms "machine-readable medium" and "computer-readable medium" include any type of storage device that is accessible by the processor 1048 and also encompass a carrier wave that encodes a data signal.
Objects, methods, inline caches, cache states and other object-oriented components may be stored in the non-volatile storage 1056, or written into memory 1052 during execution of, for example, an object-oriented software program. In this way, the components illustrated in, for example, FIGS. 1-3 and 6-7 can be instantiated on the computer system 1040.
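To illustrate the kind of object-oriented component referred to above, the following is a minimal sketch of a monomorphic inline cache for dynamic dispatch. All class and method names here are hypothetical and chosen for illustration; they are not taken from the specification or its figures. The sketch caches, per call site, the receiver class seen on the previous dispatch so that a repeated dispatch to the same class can skip the full method lookup:

```java
// Hypothetical sketch of a monomorphic inline cache for dynamic dispatch.
// A call site remembers the last receiver class and its resolved method;
// a hit reuses the cached method, a miss falls back to a full lookup.
import java.lang.reflect.Method;

public class InlineCacheSketch {
    // One cache entry per call site: the last receiver class and its method.
    static final class CallSiteCache {
        Class<?> cachedClass;   // receiver class observed on the previous dispatch
        Method cachedMethod;    // method resolved for that class

        Object dispatch(Object receiver, String name) throws Exception {
            Class<?> cls = receiver.getClass();
            if (cls != cachedClass) {             // cache miss: full lookup, then refill
                cachedMethod = cls.getMethod(name);
                cachedClass = cls;
            }
            return cachedMethod.invoke(receiver); // cache hit path is a single invoke
        }
    }

    public static void main(String[] args) throws Exception {
        CallSiteCache site = new CallSiteCache();
        // Both calls present a String receiver, so the second call hits the cache.
        String a = (String) site.dispatch("hello", "toUpperCase");
        String b = (String) site.dispatch("world", "toUpperCase");
        System.out.println(a + " " + b); // prints "HELLO WORLD"
    }
}
```

In a virtual machine such as the one described herein, an analogous structure would be embedded in the generated or interpreted code at each call site; when such a cache is instantiated, it occupies memory 1052 and may be persisted to the non-volatile storage 1056 like any other object-oriented component.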
The computer system 1040 is one example of many possible computer systems which have different architectures. For example, personal computers based on an Intel microprocessor often have multiple buses, one of which can be an I/O bus for the peripherals and one that directly connects the processor 1048 and the memory 1052 (often referred to as a memory bus). The buses are connected together through bridge components that perform any necessary translation due to differing bus protocols.
Network computers are another type of computer system that can be used with the present invention. Network computers do not usually include a hard disk or other mass storage, and the executable programs are loaded from a network connection into the memory 1052 for execution by the processor 1048. A Web TV system, which is known in the art, is also considered to be a computer system according to the present invention, but it may lack some of the features shown in FIG. 10, such as certain input or output devices. A typical computer system will usually include at least a processor, memory, and a bus coupling the memory to the processor.
In addition, the computer system 1040 is controlled by operating system software which includes a file management system, such as a disk operating system, which is part of the operating system software. One example of operating system software with its associated file management system software is the family of operating systems known as Windows® from
Microsoft Corporation of Redmond, Washington, and their associated file management systems. Another example of operating system software with its associated file management system software is the Linux operating system and its associated file management system. The file management system is typically stored in the non-volatile storage 1056 and causes the processor 1048 to execute the various acts required by the operating system to input and output data and to store data in memory, including storing files on the non-volatile storage 1056.
Some portions of the detailed description are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as "processing" or "computing" or "calculating" or "determining" or "displaying" or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
The present invention, in some embodiments, also relates to apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer
program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each of which may be coupled to a computer system bus.
The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the processes of some embodiments. The required structure for a variety of these systems will appear from the description below. In addition, the present invention is not described with reference to any particular programming language, and various embodiments may thus be implemented using a variety of programming languages.
While this invention has been described by way of example in terms of certain embodiments, it will be appreciated by those skilled in the art that certain modifications, permutations and equivalents thereof are within the inventive scope of the present invention. It is therefore intended that the following appended claims include all such modifications, permutations and equivalents as fall within the true spirit and scope of the present invention; the invention is limited only by the claims.