WO1994020905A1 - Method and apparatus for memory management - Google Patents


Info

Publication number: WO1994020905A1
Authority: WIPO (PCT)
Prior art keywords: memory, module, modules, data, transient
Application number: PCT/US1994/002523
Other languages: French (fr)
Inventor: Grant G. Echols
Original Assignee: Novell, Inc.
Application filed by Novell, Inc.
Priority to EP94909862A (published as EP0688449A4)
Priority to JP6520278A (published as JPH08507630A)
Priority to AU62537/94A (published as AU6253794A)
Publication of WO1994020905A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/06 Addressing a physical block of locations, e.g. base addressing, module addressing, memory dedication
    • G06F12/0615 Address space extension
    • G06F12/0623 Address space extension for memory modules

Definitions

  • This invention relates generally to the field of computer memory systems and in particular to a memory management system for control and optimization of computer memory use.
  • A typical computer system consists of a number of modules or components.
  • Computer systems typically include a central processing unit (CPU) such as a microprocessor.
  • The microprocessor is a program-controlled device that obtains, decodes and executes instructions.
  • A computer system also includes storage components for storing system operating software, application program instructions and data. These storage components may be read only memory (ROM), random access memory (RAM), or mass storage such as disk or tape storage, or any other suitable storage means.
  • The operation of a computer system is controlled by a series of instructions known as the "operating system".
  • The operating system is used to control basic functions of the computer system, such as input/output, and is typically stored in a permanent storage element such as a ROM or disk storage and is then loaded and run in RAM. Examples of operating systems include MS-DOS and PC-DOS.
  • Computer systems are used to execute application programs. During some processing operations, the CPU may also require the storage of data temporarily while instructions are executed on that data.
  • The application program that controls the processing and the operating system under which the program runs must be accessible to the CPU. This information is made available to the CPU by storing it in RAM, a resource known as "main memory".
  • The memory component known as main memory is a scarce resource that is dynamically allocated to users, data, programs or processes.
  • Memory users competing for this scarce resource include application programs, TSR's ("terminate and stay resident" programs) and other processes.
  • Application programs include word processors, spreadsheets, drawing programs, databases, etc.
  • Certain application programs may be stored in ROM. Generally, however, application programs are stored on a mass storage device, such as a disk drive. Upon initialization, application programs that are to be executed by the CPU are transferred from mass storage to RAM.
  • TSR's are also used on computer systems. Such programs provide "hot keys" and "pop-up windows" and are used to perform background tasks, such as displaying a clock in the corner of the video display or monitoring disk drive activity. TSR's have been written for many other applications, and users often desire to have several TSR's resident on their computers simultaneously. Since each TSR requires memory space in which it is located, adding TSR's to the system also increases the demand on main memory.
  • Main memory is typically a silicon-based memory such as RAM, for example dynamic random access memory (DRAM).
  • Main memory RAM can be accessed as conventional memory, extended memory and expanded memory.
  • Conventional memory is a region of RAM that is most easily accessed by the CPU. It is desirable to have data and instructions needed by the CPU to be stored in conventional memory. However, there are limits on the size of conventional memory that limit the amount of data and instructions that can be stored in conventional memory.
  • Extended memory is RAM greater than 1024 Kbytes that can be directly addressed by microprocessors with sufficient numbers of address lines.
  • The Intel 80286 microprocessor has 24-bit addressing capability and can address 15M of extended memory above the first 1M of memory.
  • The Intel 80386 and 80486 microprocessors have 32-bit addressing capability and can address approximately 4 Gigabytes of extended memory above the first 1M of memory.
  • Expanded memory, also known as the "expanded memory specification" (EMS), reserves a portion of extended memory, which is not directly accessible, for use as expanded memory and divides it into pages. By switching the expanded memory one page at a time into the address space which is directly accessible by the CPU, EMS is able to access a virtually unlimited amount of memory. However, EMS takes time to change pages. If the desired data is not in the EMS page frame located in directly accessible memory, EMS must page out the current contents of the page frame and page in the page from expanded memory which contains the desired data. Since such a page change requires time, the processing speed of the computer is reduced. Also, EMS is not generally applicable to all application software. Application software must be written specifically to take advantage of EMS if it is available.

Memory Map
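The page-switching behavior of expanded memory described above can be sketched as a short simulation. This is illustrative only; the class and method names are invented for this sketch and are not part of the EMS specification or the patent.

```python
# Minimal simulation of EMS-style page switching: a directly addressable
# page frame is backed by a larger store of expanded memory, and every
# access to a page not currently in the frame forces a costly page switch.

PAGE_SIZE = 16 * 1024  # EMS pages are 16 Kbytes


class EmsPageFrame:
    def __init__(self, num_expanded_pages):
        # Expanded memory is not directly visible to the CPU.
        self.expanded = [bytearray(PAGE_SIZE) for _ in range(num_expanded_pages)]
        self.mapped_page = None   # which expanded page is in the frame
        self.page_switches = 0    # each switch costs time on real hardware

    def read(self, page, offset):
        if self.mapped_page != page:
            # The desired data is not in the page frame: page out the
            # current contents and page in the requested page.
            self.mapped_page = page
            self.page_switches += 1
        return self.expanded[page][offset]


frame = EmsPageFrame(num_expanded_pages=4)
frame.expanded[2][0] = 42
assert frame.read(2, 0) == 42      # first access forces a page switch
assert frame.page_switches == 1
frame.read(2, 1)                   # same page: no additional switch
assert frame.page_switches == 1
frame.read(3, 0)                   # different page: another costly switch
assert frame.page_switches == 2
```

The switch counter makes the performance point of the passage concrete: locality of reference keeps the frame stable, while scattered accesses multiply page changes.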
  • Figure 1 illustrates a memory map of main memory RAM on a typical computer, namely, a computer based on the 8088/8086 family of microprocessors and operating under MS-DOS, such as an IBM Personal Computer.
  • The memory map of Figure 1 is not the only possible memory map, but is an example of a typical memory map.
  • The memory map is organized from the bottom up, with memory location zero (101) at the bottom, continuing through the highest location in memory at the top.
  • The memory map has three basic areas.
  • The first area includes the lowest 640K of memory and is referred to as conventional memory 102.
  • Conventional memory 102 consists entirely of RAM, which allows both read and write operations, although not all systems include the entire 640K, and some memory space may be left unused.
  • MS-DOS uses the lowest portion of memory to store its own code 106 and associated data 107. Above that, MS-DOS stores application software, TSR's and device drivers, collectively illustrated as element 112.
  • The second area is reserved memory 103, which lies in the memory addresses between the 640K RAM limit and 1024K.
  • The reserved memory area 103 is occupied mainly by ROM's, which are read only devices.
  • The ROM's found in the reserved memory area include the system ROM, video ROM and perhaps ROM for other peripheral devices, such as hard disk drives or network interfaces.
  • The reserved memory area 103 also includes other types of specialized memory, e.g. video frame buffers.
  • System ROM, which supports the basic operations of the computer, typically occupies, for example, the highest 64K of the reserved memory area 103, from 960K to 1024K.
  • The remaining space in the reserved memory area 103 is either unused or used for other purposes, including ROM's which support other peripheral devices or an EMS page frame.
  • The third area, extended memory 111, includes all memory above 1M.
  • Microprocessors with a 24-bit addressing capability, such as the 80286, can address up to 16M of memory, including 15M of extended memory 111 in addition to the 1M of reserved memory 103 and conventional memory 102.
  • Microprocessors which have a 32-bit addressing capability, such as the 80386 and 80486, can address up to 4G of memory, which includes 4095M of extended memory 111 in addition to the 1M of reserved memory 103 and conventional memory 102.
  • The area of extended memory 111 located just above 1M is sometimes referred to as the high memory area 113.
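The three-area layout of Figure 1 reduces to simple address arithmetic, sketched below. The function name and string labels are invented for illustration; the boundaries follow the description above.

```python
# Address-range arithmetic for the memory map of Figure 1: conventional
# memory (102) below 640K, reserved memory (103) from 640K to 1024K,
# and extended memory (111) above 1M.

K = 1024


def region(addr_kbytes):
    """Classify an address, given in Kbytes, into the three basic areas."""
    if addr_kbytes < 640:
        return "conventional"       # element 102, lowest 640K
    if addr_kbytes < 1024:
        return "reserved"           # element 103, 640K to 1024K
    return "extended"               # element 111, above 1M


assert region(0) == "conventional"
assert region(639) == "conventional"
assert region(640) == "reserved"
assert region(960) == "reserved"    # system ROM area, 960K to 1024K
assert region(1024) == "extended"
# A 24-bit address bus (e.g. the 80286) tops out at 16M total:
assert (1 << 24) // K == 16 * K
```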
  • The amount of conventional memory space available for application programs can be increased by reducing the amount of memory used by TSR's, reducing the number of TSR's, or by relocating TSR's to memory outside of conventional memory.
  • Reducing the number of TSR's is not desired because it can result in reduced functionality or performance. Reducing the amount of conventional memory used by each TSR is not practical because it requires rewriting each TSR.
  • Because TSR's occupy some of the conventional memory space which could otherwise be used by application software, more memory would be available for application software if these programs could be moved to memory outside of conventional memory 102.
  • One prior art method increases the amount of conventional memory 102 available for application software by moving TSR's from conventional memory 102 to reserved memory 103.
  • The amount of unused reserved memory 103 must first be determined. Also, the amount of conventional memory occupied by TSR's must be determined. Then, a sufficient amount of unallocated RAM from extended memory must be mapped into the reserved memory area to provide memory space for the TSR's. Next, the TSR's are relocated to the available memory in the reserved memory area 103.
  • This prior art method has disadvantages. First, it can use only unallocated reserved memory space. It cannot use reserved memory space that has been allocated to ROM's, video frame buffers, or other uses. Also, the relocation of TSR's must be done in a manner that ensures that all references to them are redirected to their new locations.
  • Another prior art method for relocating from conventional memory is known as an "overlay". Overlays are executable portions of programs that are swapped in and out of memory as needed. Overlays have the disadvantage of requiring the program using the overlays to be linked with an overlay manager that controls access to the functions and data that are located in the overlays.
  • An example of the overlay scheme is illustrated in Figures 2A and 2B.
  • Memory block 201 includes a region of conventional memory 102.
  • Conventional memory 102 includes region 202 at the lower addresses for storing operating system code, such as DOS.
  • The remainder 112 of conventional memory 102 above the DOS region 202 and below reserved memory 103 is used for storing applications, TSR's, device drivers, etc.
  • Consider an application, such as Word Perfect, a word processing application, that is too large to fit entirely in memory region 112. Therefore, only a portion of Word Perfect is stored in memory region 112 at any one time. The remaining portions are stored in other memory, such as disk storage 204. As portions of the application are needed, they are transferred from disk storage 204 to memory region 112.
  • A portion of the application, represented by block A 203, is stored in memory region 112.
  • The application is divided into two other portions, block B 205 and block C 206, that are stored in disk storage 204.
  • Disk storage 204 is coupled to RAM 201 through a bus or other transmission means 207.
  • When the functionality contained in block A 203 is being called or used, block A 203 remains resident in memory region 112. When other functionality is required, the code providing that functionality must be transferred from disk storage 204 to the memory 201. Referring now to Figure 2B, block B 205 is transferred from disk storage 204 to memory region 112. Accordingly, the application portion block A 203 previously residing in memory region 112 is transferred to disk storage 204. Block B 205 is shown graphically as taking up more of memory region 112 than did block A 203. The blocks need not be the same size, but must be no larger than the available address space in memory region 112.
  • A disadvantage of overlays is that they are swapped to and from a disk file, increasing execution time and reducing performance.
  • Certain programs require some code or data to always be resident in conventional memory. Overlays do not provide any method of identifying code or data that permanently resides in conventional memory.
  • When blocks A 203 and B 205 are swapped, they are swapped in their entirety. No portion of block A 203 remains in memory region 112. Therefore, such programs cannot use the overlay scheme.
  • Some TSR's cannot be converted to overlays. These TSR's include those that provide internal DOS functionality, such as the NetWare DOS Client software produced by Novell, Inc. of Provo, Utah.
  • the present invention provides DOS executable programs that are constructed so that they can cooperate with other similarly constructed programs in sharing memory used by portions of their code or data. This reduces the overall requirements of conventional memory.
  • These programs are referred to as modules and can be application programs, TSR's or any other executable files which are converted into this module format.
  • The modules include transient code and data that may be swapped out of conventional memory, and global code and data that is not swappable and stays resident in conventional memory.
  • Modules of the present invention can call other modules, as well as interrupt handlers and other code or data that is externally or asynchronously available.
  • The present invention thus supports more programming interfaces than overlays.
  • The present invention reduces conventional memory requirements by allowing modules to share a single block of conventional memory. Regardless of the number of modules, the size of the block of conventional memory allocated for their use remains the same.
  • The memory block allocated is of a size large enough to store the largest block of transient code or data of the modules that share the conventional memory block.
  • The transient blocks are swapped between conventional memory and extended or expanded memory. This significantly reduces transfer time compared to overlay schemes, improving performance.
  • Figure 1 is a memory map illustrating the memory organization of a typical computer system.
  • Figures 2A and 2B illustrate a prior art overlay scheme.
  • Figure 3 illustrates the module structure of the present invention.
  • Figures 4A-4E illustrate operation of the memory manager.
  • Figure 5 is a flow diagram illustrating the pre-init operation of the invention.
  • Figure 6 is a flow diagram illustrating the real-init operation of the invention.
  • Figure 7 is a flow diagram illustrating the calling of a module in the invention.
  • Figure 8 is a block diagram of a computer system for implementing the invention.
  • Figure 9 illustrates a VMCB data block.

DETAILED DESCRIPTION OF THE PRESENT INVENTION
  • the present invention reduces conventional memory requirements by providing a method and apparatus for TSR's and other programs to share space in conventional memory.
  • The TSR's and programs are configured as "modules" and a module manager controls the swapping of modules into and out of conventional memory.
  • Only one module is resident in conventional memory at any one time.
  • The other modules are stored in extended memory or expanded memory. This avoids time consuming and performance reducing disk swaps.
  • Figure 4A illustrates a memory map of conventional, reserved, extended and expanded memory.
  • Conventional memory 102 includes DOS storage region 202 at the lower memory area. Just above the DOS region 202 in the conventional memory 102 is space reserved for the module manager 401. Above the module manager 401 is a region 402 reserved for global code and data of any modules available to the system. Stored above the global code and data region 402 is a region reserved for transient blocks of the modules.
  • The transient block region 403 has an upper boundary 404 which is fixed and determined by the largest transient block of any module in the system.
  • The space in conventional memory 102 above upper boundary 404 and below reserved memory 103 is available for other applications and processes.
  • Reserved memory 103 is used for ROMs and for expanded memory 406. Above reserved memory 103 is extended memory 111 (above 1024 Kbytes).
  • The extended memory 111 stores the transient block 405 of a module referred to as module A.
  • The expanded memory 406 includes the transient blocks 407 and 408 of modules referred to as modules B and C, respectively.
  • After the global code and data have been stored in global region 402 and the upper bound 404 of transient block 403 has been defined, the module manager is ready to access modules as they are called. Referring to Figure 4B, consider the case where module B is requested. Module B may include code permanently resident in the global region 402. When module B is called, the module manager transfers the module B transient block 407 from expanded memory 406 into the transient block 403 of conventional memory 102. Any transient block of other modules is swapped out of conventional memory at this time. The process calling module B then can access it as if module B were always resident in conventional memory. As seen in Figure 4B, the transient block 407 of module B does not use all of the available address space in transient block 403. However, the upper bound 404 of transient block 403 remains fixed.
  • Figures 4C-4E illustrate the process of a module calling another module.
  • The transient block 408 of module C is resident in transient block 403 of conventional memory 102, as illustrated in Figure 4C.
  • Module C calls module A through the module manager.
  • The module manager keeps track of the calling module and the destination module.
  • The transient block 408 of module C is swapped with the transient block 405 of module A.
  • The transient block 405 of module A is now stored in the transient block 403. Note that the transient block 405 of module A occupies all the address space of transient block 403.
  • When module A returns, the module manager swaps the transient block 405 of module A with the transient block 408 of module C, as illustrated in Figure 4E.
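The swap sequence of Figures 4C-4E can be sketched as a small simulation: one shared transient block in conventional memory, with module transient images kept in their "home" enhanced-memory space. The class and method names are invented for this sketch, not taken from the patent.

```python
# Simulation of the shared transient block (403): mapping a module in
# first swaps the resident module's image back to its home space, so
# any changes are preserved; a call maps the callee in and, on return,
# maps the caller back in.

class ModuleManager:
    def __init__(self, transient_block_size):
        self.block = bytearray(transient_block_size)  # shared block 403
        self.home = {}         # module name -> home image (enhanced memory)
        self.resident = None   # module currently mapped into the block

    def register(self, name, image):
        assert len(image) <= len(self.block)   # upper bound 404 is fixed
        self.home[name] = bytearray(image)

    def map_in(self, name):
        if self.resident is not None:
            # Swap the resident transient block out to its home space.
            img = self.home[self.resident]
            img[:] = self.block[:len(img)]
        self.block[:len(self.home[name])] = self.home[name]
        self.resident = name

    def call(self, caller, callee):
        self.map_in(callee)        # Figure 4D: callee swapped in
        # ... the callee's function would run here ...
        self.map_in(caller)        # Figure 4E: caller swapped back in


mgr = ModuleManager(4096)
mgr.register("A", b"\xAA" * 4096)  # module A fills the whole block
mgr.register("C", b"\xCC" * 1024)
mgr.map_in("C")                    # Figure 4C: module C resident
mgr.call("C", "A")                 # C calls A through the module manager
assert mgr.resident == "C"
assert mgr.block[:1024] == b"\xCC" * 1024
```

Because the swaps are memory-to-memory rather than memory-to-disk, this scheme avoids the disk traffic that makes overlays slow.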
  • Code and data of modules are structured into different categories so that a general solution is provided for management of the modules in conventional memory.
  • These categories include a function dispatch table (jump table), transient code and data, global code and data, and start-up code and data.
  • The format of a module is illustrated in Figure 3.
  • The structure of the module 301 includes three regions 302, 303 and 304.
  • Region 302 includes a jump table 305 and a transient group 306.
  • Region 303 contains global code and data and region 304 includes start-up code and data.
  • The transient region 302 includes code and data that is "swappable." That is, this data can be swapped in and out between conventional memory and enhanced memory (extended or expanded).
  • The jump table 305 represents functions supported by the module.
  • The jump table (also referred to as transient code) can be swapped out at any time. Therefore, interrupt handlers (or any code segment which can be directly accessed) cannot exist in the transient code segment.
  • The first entry in the jump table is the pointer to the module init routine.
  • The jump table 305 consists of pointers to at least four predefined functions which are common to all modules, namely, init, unload, version, and statistics.
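The jump table layout can be sketched as an ordered list of function pointers, with entry zero pointing at init as described above. The helper name and the placeholder function bodies are invented for illustration.

```python
# A sketch of a module jump table: the first entries point to the four
# predefined functions (init, unload, version, statistics) common to
# all modules; further entries are module-specific.

def make_jump_table(init, unload, version, statistics, *extra):
    # Entry 0 is the pointer to the module init routine.
    return [init, unload, version, statistics, *extra]


calls = []
table = make_jump_table(
    lambda: calls.append("init"),
    lambda: calls.append("unload"),
    lambda: (1, 0),                       # major and minor version
    lambda: b"\x00\x00",                  # empty length-preceded stats buffer
)
table[0]()                 # dispatch function 0: init
assert calls == ["init"]
assert table[2]() == (1, 0)
assert len(table) == 4
```

Dispatching a call by function number is then just an index into this table, which is how calls-by-number can reach the right routine without the caller knowing any addresses.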
  • Init is an initialization routine in the module. This initialization routine performs pre-init to obtain the necessary parameters for initializing a module and then performs a real-init to actually load the module into the appropriate segment. It is within this real-init process that vectors are hooked.
  • The unload function facilitates release of resources.
  • The version function obtains the major and minor version of the module. This function provides commonality between modules. Other subfunctions of this version function also establish commonality between modules. The other subfunctions (01h-0Bh) also facilitate inter-module communication (multi-casting) regarding connection establishment, disconnection, etc.
  • The statistics function obtains statistics for the module. This function is optional. The statistics are for module debugging.
  • The statistics structure is a length-preceded buffer; the first word of the statistics structure table indicates the number of bytes of valid statistics information. All other statistics information needs to be recorded between the stat size and the stat end fields.
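The length-preceded statistics buffer can be sketched as follows. The helper names are invented; the layout (a leading word giving the count of valid bytes, little-endian as on x86) follows the description above.

```python
# A sketch of the length-preceded statistics structure: the first word
# (two bytes) gives the number of bytes of valid statistics information
# that follow it.

import struct


def pack_statistics(stats_bytes):
    return struct.pack("<H", len(stats_bytes)) + stats_bytes


def unpack_statistics(buf):
    (size,) = struct.unpack_from("<H", buf, 0)   # the stat size field
    return buf[2:2 + size]


buf = pack_statistics(b"\x01\x02\x03\x04")
assert struct.unpack_from("<H", buf)[0] == 4
assert unpack_statistics(buf) == b"\x01\x02\x03\x04"
```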
  • Transient data, like transient code, has specific requirements. Certain types of data cannot exist in the transient group. These restricted types include stacks and data which is passed to another module. For performance reasons, all data that is not global should be in this transient data segment. Transient data is swapped between enhanced memory and conventional memory.

GLOBAL CODE AND DATA

  • The global region 303 stores code and data that must remain resident in conventional memory.
  • code and data that may be stored in the global region 303 include interrupt handlers for global code, far call handlers not accessed through the module manager, stacks, and data passed to other modules. It is possible to create a module with no memory in the global region.
  • The startup region includes the initialization code.
  • The code provides pre-init and real-init operations.
  • Each module provides the module manager with a VMCB structure data block.
  • The modules use the init routine to report the amount of initialization memory required (including global and transient memory segments) through this VMCB structure.
  • The VMCB_InitImageParaLength parameter defines the preload total size.
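The kind of information a VMCB data block carries can be sketched as below. Only VMCB_InitImageParaLength appears in the text above; the other field names are hypothetical stand-ins for the global and transient sizes the init routine reports, and sizes are in DOS paragraphs (16 bytes each).

```python
# A sketch of a VMCB-like structure reporting a module's memory
# requirements to the module manager. Field names other than
# init_image_para_length (VMCB_InitImageParaLength) are invented.

from dataclasses import dataclass

PARA = 16  # a DOS paragraph is 16 bytes


@dataclass
class VMCB:
    init_image_para_length: int   # VMCB_InitImageParaLength: preload total size
    global_para_length: int       # hypothetical: global code/data size
    transient_para_length: int    # hypothetical: transient code/data size

    def init_image_bytes(self):
        return self.init_image_para_length * PARA


vmcb = VMCB(init_image_para_length=0x100,
            global_para_length=0x20,
            transient_para_length=0x80)
assert vmcb.init_image_bytes() == 4096   # 256 paragraphs of 16 bytes
```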
  • Overseeing the individual modules and multiplexors is the module manager. Applications make all calls to the module manager, which directs requests to their proper destinations, whether that be another module (child) or a multiplexor. The module manager also ensures that replies return to their appropriate callers.
  • Every module calls other modules via the module manager. Even when a module is calling its particular layer's multiplexor, it makes that call to the module manager, which then calls the particular multiplexor. Likewise, when a call returns from a particular module, that call makes its way from the multiplexor to the particular module via the module manager.
  • One responsibility of the module manager is ensuring that API calls between the other pieces are properly routed.
  • The module manager must therefore know if the modules required by a given user application have loaded. It polices modules that call other modules.
  • The module manager handles modules and functions that are called asynchronously as well.
  • The module manager provides the APIs to call a function and a module by number.
  • The module manager also knows who is calling that function, via the caller's module ID. Module IDs are pre-assigned. Because the module manager handles all APIs on a number basis, the module manager has no intrinsic dependencies or ties to the individual functions in the modules. Consequently, modules that are not part of the specific Requester model can use the module manager as a TSR memory manager. This is what gives the module manager the capability of supporting varied and diverse TSRs.
  • The module manager provides the basic services that all modules require, especially those allowing and facilitating calls between all loaded modules. Therefore, the module manager encompasses all layers of the Requester model.
  • The module manager not only loads and unloads each module, including multiplexors, it also handles memory services (allocation and management) for all modules.
  • The module manager employs memory swapping for its modules.
  • The module manager decides whether a given module uses expanded memory, extended memory, conventional memory, or any memory type supported, without affecting the modules themselves. The individual child modules are therefore freed from these memory concerns.
  • Child modules must still conform to certain requirements for memory usage. Once they have done so, they gain all the advantages of having the module manager handle memory mechanisms for them.
  • The module manager is also responsible for loadtime-configuration APIs. Any module may be configurable. For instance, the connection table may want a certain number of connections, or IPX may want to support a number of ECBs, or bigger or smaller buffers. In these instances, the module manager does the work for the module.
  • A user may want to load a module in non-swapped memory, regardless of the memory type being used, in order to optimize performance. Ideally, optimal configuration variations are administered by the module manager.
  • An API is provided for modules that may desire this capability.
  • The Requester includes an API wherein a module can specify its own configuration options, including the tokens used to recognize those options. This is a table-driven API that can be called at startup of a module to parse configuration information from the NET.CFG file.
  • When the module manager loads, it reads from a file, within the current or other-specified directory, what modules it should load. The module manager loads those modules in the file-specified order.
  • VLM C:\NWCLIENT\CONN.VLM
  • VLM /C C:\NWCLIENT\NET.CFG
  • A default set of modules is hard-coded into the module manager for basic functionality.
  • NET.CFG can be used to override the default options.
  • The module manager returns an address in ES:BX that is a pointer to the far call address (the module manager).
  • AX is zeroed out to let the application know the module manager handled the call.
  • The module manager uses a call-by-number system which uniquely identifies modules in the system. Functions are also called by number. This protocol requires that applications provide three essential pieces of information to the module manager: CALLER_ID, DEST_ID, and DEST_FUNC. Each of these three numbers is of size WORD.
  • CALLER_ID and DEST_ID are numbers which uniquely identify a particular module. The module manager uses these numbers to swap in the proper module and dispatch the call appropriately.
  • DEST_FUNC indicates what function the caller wants to perform. Destination functions are defined within the individual module. These functions are defined by the module's jump table 305.
  • The following code is an example of a non-module application using the far call address to make a request of _VLM Notify to return the version of the module manager module.
  • The caller first pushes its own ID on the stack. (CALLER_IDs are non-zero for modules and 0 for applications.) Then, the caller pushes the destination ID, specifically the ID of the multiplexor module or of the specific child module it wants to call. Finally, the caller pushes the desired function number.
  • There are two reserved (or system-only) functions: function zero and function two, used for initialization and unload respectively. Functions one and three are also used consistently across the various modules: one is used for a generic notify function with multiple subfunctions, and three for module statistics.
  • The module manager clears the CALLER_ID, DEST_ID and DEST_FUNC from the stack, so an application must not do so. This is commonly referred to as the Pascal calling convention.
  • Two registers are used by the module manager: AX and BP.
  • BP is for internal use by the module manager only. Applications should not use BP for any parameter request. BP may be used only to push on the stack the three required values. Applications should use AX for return codes only.
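The call-by-number protocol can be sketched as a stack simulation: the caller pushes CALLER_ID, DEST_ID and DEST_FUNC (each one WORD), the module manager pops all three (the Pascal, callee-cleans convention), and the return code travels back as "AX". The module table contents here are hypothetical.

```python
# Simulation of the three-word call protocol described above.

class Stack:
    def __init__(self):
        self.words = []

    def push(self, w):
        assert 0 <= w <= 0xFFFF     # each value is of size WORD
        self.words.append(w)

    def pop(self):
        return self.words.pop()


def module_manager_dispatch(stack, modules):
    # Pascal convention: the module manager, not the caller,
    # clears the three parameters from the stack.
    dest_func = stack.pop()          # pushed last, popped first
    dest_id = stack.pop()
    caller_id = stack.pop()
    ax = modules[dest_id][dest_func](caller_id)   # return code in "AX"
    return ax


# A hypothetical destination module (ID 7) whose function 1 reports
# a version number.
modules = {7: {1: lambda caller_id: 0x0120}}

stack = Stack()
stack.push(0)    # CALLER_ID: zero for applications, non-zero for modules
stack.push(7)    # DEST_ID
stack.push(1)    # DEST_FUNC
ax = module_manager_dispatch(stack, modules)
assert ax == 0x0120
assert stack.words == []   # the manager cleared the stack, not the caller
```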
  • The module manager also provides a method for handling asynchronous calls.
  • The calling function provides a pointer to a block of memory that includes the destination ID, destination function, caller ID, and also any registers that need to be set up for the call. The request can then be put on hold and executed at a later time. When the module manager determines that it can't execute code that is needed, the module manager can put off the execution of the code. The caller of the function can receive control back before the function is actually completed.
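The deferred-call mechanism can be sketched as a queue of request blocks. The field and function names are invented for this sketch; the block contents (destination ID, destination function, caller ID, register values) follow the description above.

```python
# Simulation of asynchronous calls: the caller hands over a request
# block, control returns immediately, and the module manager executes
# the request later.

from collections import deque
from dataclasses import dataclass, field


@dataclass
class AsyncRequest:
    dest_id: int
    dest_func: int
    caller_id: int
    registers: dict = field(default_factory=dict)


pending = deque()


def call_async(req):
    # The manager can't run this now, so it defers execution; the
    # caller receives control back before the function has completed.
    pending.append(req)


def run_pending(dispatch):
    while pending:
        dispatch(pending.popleft())


done = []
call_async(AsyncRequest(dest_id=7, dest_func=1, caller_id=0,
                        registers={"BX": 0x1234}))
assert done == []              # control returned before execution
run_pending(lambda r: done.append((r.dest_id, r.dest_func)))
assert done == [(7, 1)]
assert not pending
```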
  • A flow diagram of the pre-init operation of the present invention is illustrated in Figure 5.
  • The configuration could include a list of new or additional modules.
  • The module manager then loads each module on the current list one at a time at step 502.
  • The module is loaded using the load overlay API at step 503.
  • The initialization function is called with the "fake-init" flag on (indicating pre-init) at step 504.
  • The module being loaded provides to the module manager a VMCB data block at step 505.
  • The module manager reads the memory requirements of the module, i.e. the initialization memory requirements, any global memory requirements, and transient memory requirements.
  • The module manager stores the parameters for the module in a module parameter table.
  • The parameters include initialization memory requirements, global memory requirements (if any), transient memory requirements, and the number of functions of the module.
  • The module manager proceeds to decision block 508.
  • At decision block 508, the argument "last module?" is made. If the argument is false, there is more module memory information to be obtained and the module manager returns to step 505. If the argument at decision block 508 is true, the memory requirements of all modules have been obtained and the module manager proceeds to step 509.
  • The module manager collects the data from the module parameter table to determine the memory needed for global data. The module manager then allocates address space in conventional memory for storing the global data.
  • The module manager identifies the largest transient memory requirement of the modules that are to be loaded.
  • The size of the largest transient block defines the size of the transient block 403 in conventional memory.
  • The module manager allocates RAM for the transient memory block of a size at least as large as the largest transient block of the modules to be loaded.
  • The module manager determines what address space can be used for storing the transient blocks of each module.
  • The order of preference is first extended memory, then expanded memory, then conventional memory.
  • The user can configure the module manager to use one type of memory in particular.
  • The module manager determines at step 512 if one type of memory is enabled for use. If a type of memory is not enabled, the module manager selects a memory type using the aforementioned heuristic.
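The sizing logic of the pre-init pass can be sketched as follows: sum the global requirements, size the shared transient block from the largest transient requirement, and pick a memory type using the stated preference order unless the user configured one. The function and parameter names are invented for this sketch.

```python
# Sketch of the pre-init computations of Figure 5: collect per-module
# requirements into a parameter table, then derive the global total,
# the shared transient block size, and the memory type to use.

def pre_init(module_reqs, enabled_type=None):
    """module_reqs maps module name -> (global_bytes, transient_bytes)."""
    table = dict(module_reqs)                       # module parameter table
    global_total = sum(g for g, _ in table.values())
    # Block 403 must hold the largest transient block of any module.
    transient_block = max(t for _, t in table.values())
    # Preference: extended, then expanded, then conventional memory,
    # unless the user enabled one type in particular.
    memory_type = enabled_type or "extended"
    return global_total, transient_block, memory_type


g, t, mt = pre_init({"A": (512, 4096), "B": (0, 1024), "C": (256, 2048)})
assert g == 768         # total conventional space needed for global data
assert t == 4096        # block 403 sized to the largest transient block
assert mt == "extended"
assert pre_init({"A": (0, 16)}, enabled_type="expanded")[2] == "expanded"
```

Note that adding more modules never grows the transient block beyond the largest single requirement, which is the key memory saving of the scheme.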
  • An example of a VMCB data block is illustrated in Figure 9.
  • The modules are loaded again with the fake-init flag not set, so that the real-init routine is executed.
  • The operation of the real-init routine is illustrated in Figure 6.
  • The module manager sets the fake-init flag to zero.
  • The first module is loaded.
  • The initialization routine is called.
  • Any interrupt vectors for the module are hooked.
  • Any other resources of the modules are allocated.
  • Global code and data is moved to the allocated address space.
  • The transient block of the module is copied into the address space allocated for that module's transient block (in extended, expanded, or conventional memory).
  • At decision block 608, the argument "last module?" is made. If the argument is true, the real-init procedure ends at step 609, where the module manager terminates and stays resident. If the argument at decision block 608 is false, the system returns to step 602.
  • module loading is illustrated in the flow diagram of Figure 7.
  • the module manager receives a call to access a module.
  • the argument "other request being serviced?" is made. If the argument is true, the module manager stores the current ID at step 703 so that when the existing call is completed, the new call can be made.
  • any module currently in the transient block is copied to its allocated address space in enhanced memory. Any changes to code or data of the module are consequently updated in the "home" address space (allocated address space of that module).
  • the transient block of the called module is mapped in from its allocated address space in enhanced memory.
  • the called function number is referenced in the jump table now stored in transient memory.
  • the function is called. On return, the caller ID is checked at 711. If necessary, the calling module is mapped into the transient block.
  • control is returned to the calling function or process. The above example applies when an application calls a module function.
  • the present invention is also used when the global memory of a module is calling a transient memory function and when a module in transient memory calls the transient block of another module.
  • a global to transient call the operation is the same as for the application call of Figure 7.
  • the caller ID of the global requester is 0.
  • the proper caller ID is pushed so that, when the caller is mapped back in and control is returned, the requesting transient code is actually in memory. Otherwise, it is possible for some other process to get mapped back into memory, which could result in an unstable environment for the computer system.
  • the present invention may be practiced on any computer system.
  • a typical computer system for practicing the present invention is illustrated in Figure 8.
  • the computer system includes a CPU 801, RAM (main memory) 802, ROM (read only memory) 803, and I/O (input/output) 804 all coupled to system bus 807.
  • the I/O block 804 provides access to other systems such as mass storage 806 through bus 805.
  • the CPU 801 controls the computer, executes instructions and processes data.
  • the CPU 801 communicates with the other components via the system bus 807.
  • the CPU receives input data from the other components of the computer over the system bus 807 and sends output data to the other components of the computer over the system bus.
  • the system bus 807 usually includes an address bus, a data bus and various other control lines. The width of the address and data buses, as well as the number and type of control lines, varies from one computer system to another.
  • Each component of the computer system including RAM 802, ROM 803, and memory mapped I/O 804, contains a number of individual memory locations. To allow the CPU 801 to access these locations, each location is assigned a specific address. Each address is a specific combination of binary values which can be transmitted over the address bus.
  • addresses for all of the locations of a single memory device are usually assigned as a contiguous block. These blocks are often assigned addresses (mapped into memory) in a contiguous manner, as well. However, there may be gaps of unassigned addresses or addresses reserved for future use.

Abstract

The present invention provides an environment for DOS executable programs or 'modules' that are constructed so that they can cooperate with other similarly constructed programs in sharing memory used by portions of their code or data. This reduces the overall requirements of conventional memory (102). The modules include transient code and data (403), that may be swapped out of conventional memory, and global code and data (402) that is not swappable and stays resident in conventional memory. The present invention reduces conventional memory requirements by allowing modules to share a single block of conventional memory for transient code and data. Instead of swapping transient blocks of modules to and from disk storage (204), the transient blocks are swapped between conventional memory and extended (111) or expanded memory (406). This significantly reduces transfer time compared to overlay schemes, improving performance.

Description

METHOD AND APPARATUS FOR MEMORY MANAGEMENT
BACKGROUND OF THE PRESENT INVENTION
1. FIELD OF THE INVENTION
This invention relates generally to the field of computer memory systems and in particular to a memory management system for control and optimization of computer memory use.
2. BACKGROUND ART
A typical computer system consists of a number of modules or components. Computer systems typically include a central processing unit (CPU) such as a microprocessor. The microprocessor is a program-controlled device that obtains, decodes and executes instructions. A computer system also includes storage components for storing system operating software, application program instructions and data. These storage components may be read only memory (ROM), random access memory (RAM), or mass storage such as disk or tape storage, or any other suitable storage means.
The operation of a computer system is controlled by a series of instructions known as the "operating system". The operating system is used to control basic functions of the computer system, such as input/output, and is typically stored in a permanent storage element such as a ROM or disk storage which is then loaded and run in RAM. Examples of operating systems include MS-DOS or PC-DOS. Computer systems are used to execute application programs. During some processing operations, the CPU may also require the storage of data temporarily while instructions are executed on that data. In addition, the application program that controls the processing and the operating system under which the program runs must be accessible to the CPU. This information is made available to the CPU by storing it in RAM, a resource known as "main memory".
The memory component known as main memory is a scarce resource that is dynamically allocated to users, data, programs or processes. Memory users competing for this scarce resource include application programs, TSR's ("terminate and stay resident" programs) and other processes. Examples of application programs include word processors, spreadsheets, drawing programs, databases, etc. Certain application programs may be stored in ROM. Generally, however, application programs are stored on a mass storage device, such as a disk drive. Upon initialization, application programs that are to be executed by the CPU are transferred from mass storage to RAM.
TSR's are also used on computer systems. Such programs provide "hot keys" and "pop-up windows" and are used to perform background tasks, such as displaying a clock in the corner of the video display or monitoring disk drive activity. TSR's have been written for many other applications, and users often desire to have several TSR's resident on their computers simultaneously. Since each TSR requires memory space in which it is located, adding TSR's to the system also increases memory demand on main memory.
Main memory is typically a silicon-based memory such as a RAM. Alternatively, dynamic random access memory (DRAM) is used as the main memory. Main memory RAM can be accessed as conventional memory, extended memory and expanded memory.
Conventional Memory
Conventional memory is a region of RAM that is most easily accessed by the CPU. It is desirable for data and instructions needed by the CPU to be stored in conventional memory. However, limits on the size of conventional memory restrict the amount of data and instructions that can be stored there.
Early microcomputers had 16-bit address buses, which provide 64K of address space. As the amount of memory required by applications increased, microcomputers had to overcome the limitations of a 16-bit address bus. Thus, the IBM Personal Computer was introduced with a segmented addressing scheme which supported a 20-bit address bus. A 20-bit address bus provides 1024K, or 1M, of address space, which is 16 times that of a 16-bit bus. The original IBM PC and system software was designed with an artificial limit for conventional memory of 640K. The 384K of address space from 640K to 1M was reserved for future use, primarily for ROM's. Since subsequent models of computers have been designed to be backwards-compatible with the original IBM PC and to use the same PC-DOS or MS-DOS operating system, the 640K limit on conventional memory continues to constrain the amount of memory available to applications on modern computers.
Extended Memory
Extended memory is RAM greater than 1024 Kbytes that can be directly addressed by microprocessors with sufficient numbers of address lines. For example, the Intel 80286 microprocessor has 24-bit addressing capability and can address 15M of extended memory above the first 1M of memory. Intel 80386 and 80486 microprocessors have 32-bit addressing capability and can address approximately 4 Gigabytes of extended memory above the first 1M of memory.
Expanded Memory
Expanded memory, also known as "expanded memory specification" (EMS), reserves a portion of extended memory, which is not directly accessible, for use as expanded memory and divides it into pages. By switching the expanded memory one page at a time into the address space which is directly accessible by the CPU, EMS is able to access a virtually unlimited amount of memory. However, EMS takes time to change pages. If the desired data is not in the EMS page frame located in directly accessible memory, EMS must page out the current contents of the page frame and page in the page from expanded memory which contains the desired data. Since such a page change requires time, the processing speed of the computer is reduced. Also, EMS is not generally applicable to all application software. Application software must be written specifically to take advantage of EMS if it is available.
Memory Map
Figure 1 illustrates a memory map of main memory RAM on a typical computer, namely, a computer based on the 8088/8086 family of microprocessors and operating under MS-DOS, such as an IBM Personal Computer. The memory map of Figure 1 is not the only possible memory map, but is an example of a typical memory map. The memory map is organized from the bottom up, with memory location zero (101) at the bottom, continuing through the highest location in memory at the top. The memory map has three basic areas. The first area includes the lowest 640K of memory and is referred to as conventional memory 102. Conventional memory 102 consists entirely of RAM, which allows both read and write operations, although not all systems include the entire 640K, and some memory space may be left unused. Conventional memory 102 is used to store system software, application software, user data and other code and data, including TSR's and device drivers. As illustrated in Figure 1, MS-DOS uses the lowest portion of memory to store its own code 106 and associated data 107. Above that, MS-DOS stores application software, TSR's and device drivers, collectively illustrated as element 112.
Above conventional memory is the 384K of reserved memory 103, which lies in the memory addresses between the 640K RAM limit and 1024K. The reserved memory area 103 is occupied mainly by ROM's, which are read only devices. The ROM's found in the reserved memory area include the system ROM, video ROM and perhaps ROM for other peripheral devices, such as hard disk drives or network interfaces. In addition to ROM's, the reserved memory area 103 also includes other types of specialized memory, e.g. video frame buffers.
System ROM, which supports the basic operations of the computer, typically occupies, for example, the highest 64K of the reserved memory area 103 from 960K to 1024K. The remaining space in the reserved memory area 103 is either unused or used for other purposes, including ROM's which support other peripheral devices or an EMS page frame.
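The page-switching cost that EMS incurs can be illustrated with a small simulation. This is a hypothetical sketch, not the EMS driver itself; in a real system the page frame is a 64K window in reserved memory divided into 16K pages, and the class and names below are invented.

```python
class EmsPageFrame:
    """Toy model of an EMS page frame: only one page of expanded memory
    is mapped into directly addressable memory at a time, and each page
    change costs time."""

    def __init__(self, expanded):
        self.expanded = expanded   # page number -> page contents
        self.mapped = None         # page currently in the frame
        self.switches = 0          # count of (slow) page changes

    def read(self, page):
        if self.mapped != page:
            # desired data is not in the page frame: page out the current
            # contents and page in the requested page from expanded memory
            self.mapped = page
            self.switches += 1
        return self.expanded[page]
```

Accessing data that alternates between pages forces a switch on every access, which is exactly why heavy EMS paging reduces processing speed.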
As noted above, extended memory 111 includes all memory above 1M. Microprocessors with a 24-bit addressing capability, such as the 80286, can address up to 16M of memory, including 15M of extended memory 111 in addition to the 1M of reserved memory 103 and conventional memory 102. Microprocessors which have a 32-bit addressing capability, such as the 80386 and 80486, can address up to 4G of memory, which includes 4095M of extended memory 111 in addition to the 1M of reserved memory 103 and conventional memory 102. The area of extended memory 111, located just above 1M, is sometimes referred to as the high memory area 113.
As TSR's proliferate, and as application programs become larger, the competition for space in conventional memory increases. It is desirable to provide more conventional memory for application programs while preserving the ability to use TSR's.
The amount of conventional memory space available for application programs can be increased by reducing the amount of memory used by TSR's, reducing the number of TSR's, or by relocating TSR's to memory outside of conventional memory.
Reducing the number of TSR's is not desired because it can result in reduced functionality or performance. Reducing the amount of conventional memory used by each TSR is not practical because it requires rewriting each TSR.
Since TSR's occupy some of the conventional memory space which could otherwise be used by application software, more memory would be available for applications software if these programs could be moved to memory outside of conventional memory 102. One prior art method increases the amount of conventional memory 102 available for application software by moving TSR's from conventional memory 102 to reserved memory 103.
To implement this method, the amount of unused reserved memory 103 must first be determined. Also, the amount of conventional memory occupied by TSR's must be determined. Then, a sufficient amount of unallocated RAM from extended memory must be mapped into the reserved memory area to provide memory space for the TSR's. Next, the TSR's are relocated to the available memory in the reserved memory area 103.
This prior art method has disadvantages. First, it can use only unallocated reserved memory space. It cannot use reserved memory space that has been allocated to ROM's, video frame buffers, or other uses. Also, the relocation of TSR's must be done so that all references to them are redirected to their new locations. Another prior art method for relocating programs out of conventional memory is known as an "overlay". Overlays are executable portions of programs that are swapped in and out of memory as needed. Overlays have the disadvantage of requiring the program using the overlays to be linked with an overlay manager that controls access to the functions and data that are located in the overlays.
An example of the overlay scheme is illustrated in Figures 2A and 2B. Referring first to Figure 2A, the first 1024 bytes of RAM that are accessible by the CPU are shown as memory block 201. Memory block 201 includes a region of conventional memory 102. Conventional memory 102 includes region 202 at the lower addresses for storing operating system code, such as DOS. The remainder 112 of conventional memory 102 above the DOS region 202 and below reserved memory 103 is used for storing applications, TSR's, device drivers, etc.
Some applications are too large to be stored entirely in conventional memory region 112. One example of such an application is Word Perfect, a word processing application. Therefore, only a portion of Word Perfect is stored in memory region 112 at any one time. The remaining portions are stored in other memory, such as disk storage 204. As portions of the application are needed, they are transferred from disk storage 204 to memory region 112.
In the example of Figure 2A, a portion of the application represented by block A 203 is stored in memory region 112. The application is divided into two other portions, block B 205 and block C 206, that are stored in disk storage 204. Disk storage 204 is coupled to RAM 201 through a bus or other transmission means 207.
When the functionality contained in block A 203 is being called or used, block A 203 remains resident in memory region 112. When other functionality is required, the code providing that functionality must be transferred from the disk storage 204 to the memory 201. Referring now to Figure 2B, block B 205 is transferred from disk storage 204 to memory region 112. Accordingly, the application portion block A 203 previously residing in memory region 112 is transferred to disk storage 204. Block B 205 is shown graphically as taking up more of memory region 112 than did block A 203. The blocks need not be the same size, but must be no larger than the available address space in memory region 112.
A disadvantage of overlays is that they are swapped to and from a disk file, increasing execution time and reducing performance. Certain programs require some code or data to always be resident in conventional memory. Overlays do not provide any method of identifying code or data that permanently resides in conventional memory. When blocks A 203 and B 205 are swapped, they are swapped in their entirety. No portion of block A 203 remains in memory region 112. Therefore, such programs cannot use the overlay scheme. Some TSR's cannot be converted to overlays. These TSR's include those that provide internal DOS functionality. One example of this is NetWare DOS Client software produced by Novell, Inc. of Provo, Utah.
Thus, the prior art does not provide a system for reducing the amount of conventional memory used by TSR's without sacrificing performance.
SUMMARY OF THE PRESENT INVENTION
The present invention provides DOS executable programs that are constructed so that they can cooperate with other similarly constructed programs in sharing memory used by portions of their code or data. This reduces the overall requirements of conventional memory. In the present invention, these programs are referred to as modules and can be application programs, TSR's or any other executable file which is converted into this module format. The modules include transient code and data that may be swapped out of conventional memory, and global code and data that is not swappable and stays resident in conventional memory.
By providing global data blocks that remain resident in memory, modules of the present invention can call other modules, as well as interrupt handlers and other code or data that is externally or asynchronously available. The present invention thus supports more programming interfaces than overlays.
The present invention reduces conventional memory requirements by allowing modules to share a single block of conventional memory. Regardless of the number of modules, the size of the block of conventional memory allocated for their use remains the same. The memory block allocated is of a size large enough to store the largest block of transient code or data of the modules that share the conventional memory block. Instead of swapping transient blocks of modules to and from disk storage, the transient blocks are swapped between conventional memory and extended or expanded memory. This significantly reduces transfer time compared to overlay schemes, improving performance.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 is a memory map illustrating the memory organization of a typical computer system.
Figures 2A and 2B illustrate a prior art overlay scheme.
Figure 3 illustrates the module structure of the present invention.
Figure 4A - 4E illustrate operation of the memory manager.
Figure 5 is a flow diagram illustrating the pre-init operation of the invention.
Figure 6 is a flow diagram illustrating the real-init operation of the invention.
Figure 7 is a flow diagram illustrating the calling of a module in the invention.
Figure 8 is a block diagram of a computer system for implementing the invention.
Figure 9 illustrates a VMCB data block.
DETAILED DESCRIPTION OF THE PRESENT INVENTION
A method for providing more efficient use of conventional memory in a computer system is described. In the following description, numerous specific details, such as type of computer system, memory address locations, amounts of memory, etc., are described in detail in order to provide a more thorough description of the present invention. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without these specific details. In other instances, well-known features have not been described in detail so as not to unnecessarily obscure the present invention.
The present invention reduces conventional memory requirements by providing a method and apparatus for TSR's and other programs to share space in conventional memory. The TSR's and programs are configured as "modules" and a module manager controls the swapping of modules into and out of conventional memory. In the preferred embodiment of the invention, only one module is resident in conventional memory at any one time. The other modules are stored in extended memory or expanded memory. This avoids time consuming and performance reducing disk swaps.
The operation of the module manager is illustrated in Figures 4A-4E. Figure 4A illustrates a memory map of conventional, reserved, extended and expanded memory. Conventional memory 102 includes DOS storage region 202 at the lower memory area. Just above the DOS region 202 in the conventional memory 102 is space reserved for the module manager 401. Above the module manager 401 is a region reserved for global code and data of any modules available to the system. Stored above the global code and data region 402 is a region reserved for transient blocks of the modules. The transient block 403 has an upper boundary 404 which is fixed and determined by the largest transient block of any module in the system. The space in conventional memory 102 above upper boundary 404 and below reserved memory 103 is available for other applications and processes.
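The saving from the fixed, shared transient block can be made concrete. Conventional memory holds every module's global block but only one transient block, sized for the largest requirement. A hypothetical calculation (function names invented; sizes in Kbytes):

```python
def footprint_shared(modules):
    """Conventional-memory use under the invention: the sum of all global
    blocks plus one shared transient block sized for the largest
    transient requirement of any module."""
    return (sum(g for g, _ in modules) +
            max((t for _, t in modules), default=0))

def footprint_naive(modules):
    """Conventional-memory use if every module stayed fully resident,
    as a conventional TSR would."""
    return sum(g + t for g, t in modules)
```

For three modules with (global, transient) sizes (2, 10), (1, 30) and (0, 20), the shared scheme uses 3 + 30 = 33K where fully resident modules would use 63K; the shared figure is unchanged as more modules are added, provided none has a larger transient block.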
Reserved memory 103 is used for ROMs and for expanded memory 406. Above reserved memory 103 is extended memory 111 (above 1024 Kbytes). The extended memory 111 stores the transient block 405 of a module referred to as module A. The expanded memory 406 includes the transient blocks 407 and 408 of modules referred to as modules B and C, respectively.
After the global code and data have been stored in global region 402 and the upper bound 404 of transient block 403 has been defined, the module manager is ready to access modules as they are called. Referring to Figure 4B, consider the case where module B is requested. Module B may include code permanently resident in the global region 402. When module B is called, the module manager transfers the module B transient block 407 from expanded memory 406 into the transient block 403 of conventional memory 102. Any transient block of other modules is swapped out of conventional memory at this time. The process calling module B then can access it as if module B were always resident in conventional memory. As seen in Figure 4B, the transient block 407 of module B does not use all of the available address space in transient block 403. However, the upper bound 404 of transient block 403 remains fixed.
Figures 4C-4E illustrate the process of a module calling another module. The transient block 408 of module C is resident in transient block 403 of conventional memory 102 as illustrated in Figure 4C. Module C calls module A through the module manager. The module manager keeps track of the calling module and the destination module. The transient block 408 of module C is swapped with the transient block 405 of module A. Referring to Figure 4D, the transient block 405 of module A is now stored in the transient block 403. Note that the transient block 405 of module A occupies all the address space of transient block 403. After module A executes the function for which it was called, the module manager swaps the transient block 405 of module A with the transient block 408 of module C, as illustrated in Figure 4E.
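The swap-and-return bookkeeping of Figures 4C-4E might be modeled as follows. This is a toy sketch: the class, its string "images", and the method names are invented, and caller ID 0 stands for global code, which is always resident and never needs remapping.

```python
class TransientBlockModel:
    """Models the shared transient block in conventional memory and the
    modules' 'home' slots in enhanced (extended or expanded) memory."""

    def __init__(self, home_images):
        self.home = dict(home_images)  # module ID -> transient image "home"
        self.resident = None           # module currently mapped in
        self.block = None              # contents of the shared block

    def map_in(self, module_id):
        if self.resident == module_id:
            return
        if self.resident is not None:
            # write the occupant back to its home slot so any changes
            # to its code or data persist
            self.home[self.resident] = self.block
        self.block = self.home[module_id]
        self.resident = module_id

    def call(self, caller_id, callee_id, fn):
        self.map_in(callee_id)          # swap the callee in
        result = fn(self.block)
        if caller_id != 0:
            self.map_in(caller_id)      # remap the requesting transient code
        return result
```

With modules A and C loaded, a call from C to A swaps A in, runs the function, then swaps C back, matching the sequence of Figures 4C-4E.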
Module Structure
In the present invention, code and data of modules are structured into different categories so that a general solution is provided for management of the modules in conventional memory. These categories include a function dispatch table (jump table), transient code and data, global code and data, and start-up code and data. The format of a module is illustrated in Figure 3.
The structure of the module 301 includes three regions 302, 303 and 304. Region 302 includes a jump table 305 and a transient group 306. Region 303 contains global code and data and region 304 includes start-up code and data. The transient region 302 includes code and data that is "swappable." That is, this data can be swapped in and out between conventional memory and enhanced memory (extended or expanded).
JUMP TABLE
The jump table 305 represents functions supported by the module. The jump table (also referred to as transient code) can be swapped out at any time. Therefore, interrupt handlers (or any code segment which can be directly accessed) cannot exist in the transient code segment. The first entry in the jump table is the pointer to the module init routine. The jump table 305 consists of pointers to at least four predefined functions which are common to all modules, namely, init, unload, version, and statistics.
Init: Init is an initialization routine in the module. This initialization routine performs pre-init to obtain the necessary parameters for initializing a module and then performs a real-init to actually load the module into the appropriate segment. It is within this real-init process that vectors are hooked.
Unload: The unload function facilitates release of resources. The unload request can be failed if a check for unload safety (CX = FFFFh) returns a critical section status. If a module is well behaved, then there is no reason to fail and a check for unload safety returns a zero, indicating that it is safe to unload for real. A request to unload for real cannot be failed. If a module fails the unload check, then all modules previously checked are notified with an unload cancel (i.e., CX = FFFEh).
Version: The version function obtains the major and minor version of the module. This function provides commonality between modules. Other subfunctions of this version function also establish commonality between modules. The other subfunctions (01h-0Bh) also facilitate inter-module communication (multi-casting) regarding connection establishment, disconnection, etc.
For example, there is a multi-cast call at the first task terminate (i.e., _Notify Handler subfunction 01h, PSP Terminate). This subfunction is part of the predefined general function.
Statistics: The statistics function obtains statistics for the module. This function is optional. The statistics are for module debugging. The statistics structure is a length-preceded buffer; the first word of the statistics structure table indicates the number of bytes of valid statistics information. All other statistics information needs to be recorded between the stat size and the stat end fields.
Other functions can be provided specific to the module and follow these pre-defined functions. The terminating zero follows the common functions and all other functions defined by the module in the jump table. This lets the module manager know how many functions are supported by the module so that requests beyond this number can be denied.
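A jump table of this shape, with the four predefined entries first, module-specific entries after, and a terminating zero, might be modeled as follows. The helper names and function bodies are invented for illustration; the real table holds far pointers, not Python callables.

```python
def make_jump_table(init, unload, version, statistics, *extra):
    # predefined functions first, module-specific functions after,
    # terminated by a zero so the manager can count supported functions
    return [init, unload, version, statistics, *extra, 0]

def supported_functions(table):
    """The terminating zero marks the end of the supported functions."""
    return table.index(0)

def dispatch(table, fn_number):
    """Deny any request beyond the module's supported function count."""
    if not 0 <= fn_number < supported_functions(table):
        raise IndexError("request beyond supported function count: denied")
    return table[fn_number]()
```

This mirrors how the module manager can route calls purely by function number, with no intrinsic knowledge of what each function does.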
TRANSIENT DATA
Transient data, like transient code, has specific requirements. Certain types of data cannot exist in the transient group. These restricted types include stacks and data which is passed to another module. For performance reasons, all data that is not global should be in this transient data segment. Transient data is swapped between enhanced memory and conventional memory. GLOBAL CODE AND DATA
The global region 303 stores code and data that must remain resident in conventional memory. For example, code and data that may be stored in the global region 303 include interrupt handlers for global code, far call handlers not accessed through the module manager, stacks, and data passed to other modules. It is possible to create a module with no memory in the global region.
STARTUP REGION
The startup region includes the initialization code. The code provides pre-init and real-init operations. During pre-init, each module provides the module manager with a VMCB structure data block. The modules use the init routine to report the amount of initialization memory required (including global and transient memory segments) through this VMCB structure. Specifically, the VMCB_InitImageParaLength parameter defines the preload total size.
The module manager uses VMCB_InitImageParaLength to determine how much memory to swap in and out during real init and run time. This method of determining module memory allocation avoids confusion and simplifies the process of keeping memory segments separate. Keeping memory segments separate is useful when running global and protected mode data segments simultaneously. If the module is dependent on other modules being loaded, the dependency for the module is checked during the pre-init stage. The fake-init flag (AX = -1) is passed on the way into this procedure. During real-init, the module is required to move the global code/data from its loaded location. The destination of this group is defined in the BX register when the module manager calls the init function on a real init. After this process, the module must execute any fixups necessitated by the moving of the global group. These fixups are performed to ensure that the data is referencing the new location rather than the group loaded by DOS.
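The two-pass sequence described above (pre-init with the fake-init flag set, then real-init) might be sketched as follows. This is a hypothetical model: only VMCB_InitImageParaLength is named in the text; the other field names, the Module class, and the byte counts are invented. Since DOS allocates memory in 16-byte paragraphs, sizes are rounded up to whole paragraphs.

```python
from dataclasses import dataclass

PARA = 16  # DOS memory paragraph size in bytes

def paras(n_bytes):
    """Round a byte count up to whole 16-byte paragraphs."""
    return -(-n_bytes // PARA)

@dataclass
class VMCB:
    init_image_para_length: int  # preload total size (named in the text)
    global_paras: int            # invented field name
    transient_paras: int         # invented field name

class Module:
    def __init__(self, global_bytes, transient_bytes):
        self.global_bytes = global_bytes
        self.transient_bytes = transient_bytes
        self.hooked = False

    def init(self, fake):
        if fake:
            # pre-init (fake-init flag set): only report memory
            # requirements through the VMCB; hook nothing
            total = self.global_bytes + self.transient_bytes
            return VMCB(paras(total), paras(self.global_bytes),
                        paras(self.transient_bytes))
        # real-init: hook vectors, move the global group, apply fixups
        self.hooked = True
        return None

def load_all(modules):
    reports = [m.init(fake=True) for m in modules]   # pass 1: gather parameters
    for m in modules:
        m.init(fake=False)                           # pass 2: real-init
    return reports
```

The manager can size the global region and shared transient block from the pass-1 reports before any module hooks a vector in pass 2.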
MODULE MANAGER
Overseeing the individual modules and multiplexors is the module manager. Applications make all calls to the module manager, which directs requests to their proper destinations, whether that be another module (child) or multiplexor. The module manager also ensures that replies return to their appropriate callers.
Every module calls other modules via the module manager. Even when a module is calling its particular layer's multiplexor, it makes that call to the module manager, which then calls the particular multiplexor. Likewise, when a call returns from a particular module, that call makes its way from the multiplexor to the particular module via the module manager.
One responsibility of the module manager is ensuring that API calls between the other pieces are properly routed. The module manager must therefore know if the modules required by a given user application have loaded. It polices modules that call other modules. The module manager handles modules and functions that are called asynchronously as well. The module manager provides the APIs to call a function and a module by number. The module manager also knows who is calling that function, via the caller's module ID. Module IDs are pre-assigned. Because the module manager handles all APIs on a number basis, the module manager has no intrinsic dependencies or ties to the individual functions in the modules. Consequently, modules that are not part of the specific Requester model can use the module manager as a TSR memory manager. This is what gives the module manager the capability of supporting varied and diverse TSRs.
The module manager provides the basic services that all modules require, especially those allowing and facilitating calls between all loaded modules. Therefore, the module manager encompasses all layers of the Requester model.
The module manager not only loads and unloads each module, including multiplexors, it also handles memory services (allocation and management) for all modules. The module manager employs memory swapping for its modules.
The module manager decides whether a given module uses expanded memory, extended memory, conventional memory, or any memory type supported, without affecting the modules themselves. The individual child modules are therefore freed from these memory concerns.
Child modules must still conform to certain requirements for memory usage. Once they have done so, they gain all the advantages of having the module manager handle memory mechanisms for them. The module manager is also responsible for loadtime-configuration APIs. Any module may be configurable. For instance, the connection table wants a certain number of connections, or IPX wants to support a number of ECBs, or bigger or smaller buffers. In these instances, the module manager does the work for the module.
Additionally, a user may want to load a module in non-swapped memory, regardless of the memory type being used, in order to optimize performance. Ideally, optimal configuration variations are administered by the module manager. An API is provided for modules that may desire this capability.
Because the module manager cannot be aware of all possible configuration permutations, the Requester includes an API wherein a module can specify its own configuration options, including the tokens used to recognize those options. This is a table-driven API that can be called at startup of a module to parse configuration information from the NET.CFG file.
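A table-driven option parser of the kind described can be sketched as follows. This is an illustrative Python model, not the Requester API; the option tokens and the comment convention shown are hypothetical:

```python
# Sketch of a table-driven configuration parser: each module supplies a
# token table mapping the keywords it recognizes to conversion handlers.
def parse_config(lines, token_table):
    options = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith(";"):   # treat ';' lines as comments
            continue
        key, _, value = line.partition("=")
        key = key.strip().upper()
        if key in token_table:                 # unknown tokens are ignored
            options[key] = token_table[key](value.strip())
    return options

# Hypothetical tokens a module might register at startup.
token_table = {"CONNECTIONS": int, "CACHE BUFFERS": int}
cfg = parse_config(["; sample options",
                    "CONNECTIONS = 8",
                    "CACHE BUFFERS = 5"], token_table)
```

The table-driven shape is what keeps the module manager ignorant of any specific option: the module owns the tokens, the manager owns only the parsing loop.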
When the module manager loads, it reads from a file, within the current or other-specified directory, what modules it should load. The module manager loads those modules in the file-specified order.
When loading modules, the current directory is used by default. If you want to load modules from a different directory, you can designate the directory in the VLM = command in the configuration file. For example:
VLM = C:\NWCLIENT\CONN.VLM
You can also specify a path for the configuration file on the module command line. For example:
VLM /C=C:\NWCLIENT\NET.CFG
A default set of modules is hard-coded into the module manager for basic functionality. However, NET.CFG can be used to override the default options.
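For example, a NET.CFG might override the hard-coded defaults with an explicit module list. The section name and module entries below are illustrative of a typical Requester configuration, not mandated by the text above:

```
; Hypothetical NET.CFG fragment; module names and paths are illustrative.
NetWare DOS Requester
    VLM = C:\NWCLIENT\CONN.VLM
    VLM = C:\NWCLIENT\IPXNCP.VLM
```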
Calling the Module Manager
To find the address of the module manager far call handler, the application issues interrupt 2Fh with 7A20h in the AX register and BX = 0h. The module manager returns an address in ES:BX that is a pointer to the far call address (the module manager). AX is zeroed out to let the application know the module manager handled the call.
Call-By-Number System:
The module manager uses a call-by-number system which uniquely identifies modules in the system. Functions are also called by number. This protocol requires that applications provide three essential pieces of information to the module manager: CALLER_ID, DEST_ID, and DEST_FUNC. Each of these three numbers is of size WORD.
CALLER_ID and DEST_ID are numbers which uniquely identify a particular module. The module manager uses these numbers to swap in the proper module and dispatch the call appropriately.
DEST_FUNC indicates what function the caller wants to perform. Destination functions are defined within the individual module, in the module's jump table region 305.
These three required elements work together on every function call managed by this system. When an application makes a module call, a far call address for the module manager must be obtained. The following code is an example of how an application retrieves the far call address.
data segment
vlmCallAddress  dword   ?
data ends

code segment
        mov     ax, 7A20h
        mov     bx, 0
        int     2Fh
        or      ax, ax
        jnz     NO_VLM
        mov     word ptr vlmCallAddress, bx
        mov     word ptr vlmCallAddress + 2, es
        ...
NO_VLM:
        ...
code ends

The following code is an example of a non-module application using the far call address to make a request of VLM_NOTIFY to return the version of the module manager module.
        mov     ax, 0
        push    ax                      ; CALLER_ID
        mov     ax, VLM_ID_VLM          ; (01h)
        push    ax                      ; DEST_ID
        mov     ax, VLM_NOTIFY          ; (01h)
        push    ax                      ; DEST_FUNC
        mov     bx, 0                   ; GET VERSION SUB-FUNC
        call    vlmCallAddress
In summary, the caller first pushes its own ID on the stack. (CALLER_IDs are non-zero for modules and 0 for applications.) Then, the caller pushes the destination ID: the ID of the multiplexor module or of the specific child module it wants to call. Finally, the caller pushes the desired function number.
There are two reserved (or system-only) functions: function zero and function two, used for initialization and unload respectively. Functions one and three are also used consistently across the various modules: one for a generic notify function with multiple subfunctions, and three for module statistics.
On return, the module manager clears the CALLER_ID, DEST_ID and DEST_FUNC from the stack, so an application must not do so. This is commonly referred to as the Pascal calling convention. Two registers are used by the module manager: AX and BP. BP is for internal use by the module manager only; applications should not use BP for any parameter request. BP may be used only to push the three required values on the stack. Applications should use AX for return codes only.
The module manager also provides a method for handling asynchronous calls. The calling function provides a pointer to a block of memory that includes the destination ID, destination function, caller ID, and also any registers that need to be set up for the call. The request can then be put on hold and executed at a later time. When the module manager determines that it cannot execute code that is needed, the module manager can put off the execution of that code. The caller of the function can receive control back before the function is actually completed.
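The deferred-call mechanism above can be sketched in Python as follows. This is a hedged simulation, not the DOS code; the control-block fields and the synchronous dispatch callback are hypothetical stand-ins for the memory block and register state the text describes:

```python
from collections import deque

# Sketch of deferred ("asynchronous") requests: the control block captures
# the routing numbers and register state so the call can be replayed later.
class AsyncManager:
    def __init__(self, dispatch):
        self.dispatch = dispatch      # synchronous call-by-number entry point
        self.pending = deque()

    def call_async(self, control_block):
        # Cannot run the needed code right now: park the request and return
        # control to the caller before the function actually completes.
        self.pending.append(control_block)
        return "queued"

    def run_pending(self):
        # Later, when execution is possible, replay each saved request.
        results = []
        while self.pending:
            cb = self.pending.popleft()
            results.append(self.dispatch(cb["caller_id"], cb["dest_id"],
                                         cb["dest_func"], cb["registers"]))
        return results

mgr = AsyncManager(lambda c, d, f, regs: (d, f, regs["bx"]))
status = mgr.call_async({"caller_id": 0, "dest_id": 1, "dest_func": 1,
                         "registers": {"bx": 0}})
```

The essential point modeled here is that queuing and execution are decoupled: the caller sees `status` immediately, while the work happens whenever `run_pending` is eventually driven.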
Module Manager Pre-Init
A flow diagram of the pre-init operation of the present invention is illustrated in Figure 5. At step 501, the module manager configures itself (NET.CFG, and VLM = files). The configuration could include a list of new or additional modules. The module manager then loads each module on the current list one at a time at step 502. The module is loaded using the load overlay API at step 503. The initialization function is called with the "fake-init" flag on (indicating pre-init) at step 504. The module being loaded provides to the module manager a VMCB data block at step 505. At step 506, the module manager reads the memory requirements of the module, i.e., the initialization memory requirements, any global memory requirements, and transient memory requirements. At step 507, the module manager stores the parameters for the module in a module parameter table. The parameters include initialization memory requirements, global memory requirements (if any), transient memory requirements, and the number of functions of the module.
The module manager proceeds to decision block 508. At decision block 508 the argument "last module?" is made. If the argument is false, there is more module memory information to be obtained and the module manager returns to step 505. If the argument at decision block 508 is true, the memory requirements of all modules have been obtained and the module manager proceeds to step 509.
At step 509 the module manager collects the data from the module parameter table to determine the memory needed for global data. The module manager then allocates address space in conventional memory for storing the global data.
At step 510, the module manager identifies the largest transient memory requirement among the modules that are to be loaded. The size of the largest transient block defines the size of the transient block 403 in conventional memory. The module manager then allocates RAM for the transient memory block of a size at least as large as the largest transient block of the modules to be loaded.
At step 511, the module manager determines what address space can be used for storing the transient blocks of each module. The order of preference is first extended memory, then expanded memory, then conventional memory. The user can configure the module manager to use one type of memory in particular. The module manager determines at step 512 if one type of memory is enabled for use. If a type of memory is not enabled, the module manager selects a memory type using the aforementioned heuristic.
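Steps 505 through 512 can be sketched together as a Python simulation. This is illustrative only, not the patented implementation; the VMCB field names and memory sizes are hypothetical:

```python
PREFERENCE = ["extended", "expanded", "conventional"]  # step 511 ordering

def plan_memory(vmcbs, available, user_choice=None):
    # Steps 505-507: tabulate each module's reported memory requirements.
    table = [{"id": v["id"],
              "global_mem": v["global_mem"],
              "transient_mem": v["transient_mem"]} for v in vmcbs]
    # Step 509: total conventional-memory space needed for global data.
    global_total = sum(e["global_mem"] for e in table)
    # Step 510: the transient block must hold the largest transient image.
    transient_block = max(e["transient_mem"] for e in table)
    # Steps 511-512: honor a user-configured memory type if it is enabled;
    # otherwise prefer extended, then expanded, then conventional memory.
    if user_choice in available:
        mem_type = user_choice
    else:
        mem_type = next(t for t in PREFERENCE if t in available)
    return global_total, transient_block, mem_type

plan = plan_memory([{"id": 1, "global_mem": 0x100, "transient_mem": 0x800},
                    {"id": 2, "global_mem": 0x40, "transient_mem": 0xC00}],
                   {"expanded", "conventional"})
```

With extended memory unavailable in this hypothetical configuration, the sketch falls back to expanded memory, mirroring the order of preference stated above.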
An example of a VMCB data block is illustrated in Figure 9.
Module Manager Real-init
After the pre-init routine, the modules are loaded again with the fake-init flag not set, so that the real-init routine is executed. The operation of the real-init routine is illustrated in Figure 6. At step 601, the module manager sets the fake-init flag to zero. At step 602, the first module is loaded. At step 603, the initialization routine is called. At step 604, any interrupt vectors for the module are hooked. At step 605, any other resources of the modules are allocated. At step 606, global code and data is moved to the allocated address space. At step 607, the transient block of the module is copied into the address space allocated for that module's transient block (in extended, expanded, or conventional memory). At decision block 608 the argument "last module?" is made. If the argument is true, the real-init procedure ends at step 609, where the module manager terminates and stays resident. If the argument at decision block 608 is false, the system returns to step 602.
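The real-init pass can be sketched as follows. Again, this is a hedged Python model of the flow in Figure 6, not the DOS code; the data structures, addresses, and the representation of the global group destination are all hypothetical:

```python
# Sketch of real-init: fake-init is off, each module's global group is
# assigned its destination in the allocated global area (the BX value on a
# real init), and its transient image is copied out to its home space.
def real_init(modules, global_base, homes):
    dest = global_base
    for m in modules:
        m["init"](fake_init=False)          # step 603: real initialization
        m["global_segment"] = dest          # step 606: move global group here
        dest += m["global_mem"]
        homes[m["id"]] = m["transient_image"]  # step 607: copy transient out
    return dest                             # first address past global data

homes = {}
mods = [{"id": 1, "global_mem": 0x100, "transient_image": b"conn",
         "init": lambda fake_init: None},
        {"id": 2, "global_mem": 0x40, "transient_image": b"ipx",
         "init": lambda fake_init: None}]
end = real_init(mods, 0x6000, homes)
```

Packing the global groups back-to-back from a single base is what lets the transient block sit in one fixed region while every module's permanent data stays resident.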
Module Loading
A flow diagram illustrating module loading is illustrated in Figure 7. At step 701 the module manager receives a call to access a module. At decision block 702, the argument "other request being serviced?" is made. If the argument is true, the module manager stores the current ID at step 703 so that when the existing call is completed, the new call can be made.
If the argument at decision block 702 is false, the system proceeds to decision block 704 and the argument "DEST ID valid?" is made. If the argument is true, the module manager proceeds to decision block 705. If the argument is false, the module manager returns an error. At decision block 705, the argument "Function # valid?" is made. If the argument is true, the system proceeds to decision block 706. If the argument is false, the module manager returns an error.
At decision block 706 the argument "caller ID = 0?" is made. If the argument is true, the caller ID is replaced with the current module ID and the system proceeds to step 707. If the argument is false, the system proceeds to step 707.
At step 707 any module currently in the transient block is copied to its allocated address space in enhanced memory. Any changes to code or data of the module are consequently updated in the "home" address space (allocated address space of that module). At step 708, the transient block of the called module is mapped in from its allocated address space in enhanced memory. At step 709, the called function number is referenced in the jump table now stored in transient memory. At step 710, the function is called. On return, the caller ID is checked at 711. If necessary, the calling module is mapped into the transient block. At step 712, control is returned to the calling function or process. The above example applies when an application calls a module function. The present invention is also used when the global memory of a module is calling a transient memory function and when a module in transient memory calls the transient block of another module. In a global to transient call, the operation is the same as for the application call of Figure 7. The caller ID of the global requester is 0. For a transient to transient call, the proper caller ID is pushed so that when control is returned, the caller is mapped back in and the requesting transient code is actually in memory. Otherwise, it is possible for some other process to get mapped back into memory, which could result in an unstable environment for the computer system.
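The swap discipline of steps 707-708 can be modeled with a short Python sketch. This is illustrative only; the module IDs and "images" are hypothetical stand-ins for the transient code and data the text describes:

```python
# Sketch of the single transient block: before a new module's code runs,
# the resident image is written back to its home space (step 707) and the
# callee's image is mapped in (step 708), preserving any changes.
class TransientBlock:
    def __init__(self, homes):
        self.homes = homes          # module ID -> saved transient image
        self.resident = None        # module ID currently mapped in
        self.image = None           # contents of the transient block

    def map_in(self, module_id):
        if self.resident == module_id:
            return                  # already resident; nothing to swap
        if self.resident is not None:
            # Step 707: write the current module back to its home space.
            self.homes[self.resident] = self.image
        # Step 708: map the called module's image into the block.
        self.image = self.homes[module_id]
        self.resident = module_id

block = TransientBlock({1: "conn-code", 2: "ipx-code"})
block.map_in(1)
block.map_in(2)     # module 1 is swapped out, module 2 mapped in
```

Restoring the caller's image before returning, as the transient-to-transient case requires, is just another `map_in` with the saved caller ID; skipping it would leave the wrong code resident, which is the instability the text warns about.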
The present invention may be practiced on any computer system. A typical computer system for practicing the present invention is illustrated in Figure 8. The computer system includes a CPU 801, RAM (main memory) 802, ROM (read only memory) 803, and I/O (input/output) 804 all coupled to system bus 807. The I/O block 804 provides access to other systems such as mass storage 806 through bus 805.
The CPU 801 controls the computer, executes instructions and processes data. The CPU 801 communicates with the other components via the system bus 807. The CPU receives input data from the other components of the computer over the system bus 807 and sends output data to the other components of the computer over the system bus. The system bus 807 usually includes an address bus, a data bus and various other control lines. The width of the address and data buses, as well as the number and type of control lines, varies from one computer system to another. Each component of the computer system, including RAM 802, ROM 803, and memory mapped I/O 804, contains a number of individual memory locations. To allow the CPU 801 to access these locations, each location is assigned a specific address. Each address is a specific combination of binary values which can be transmitted over the address bus. Since most memory devices include more than one location, addresses for all of the locations of a single memory device are usually assigned as a contiguous block. These blocks are often assigned addresses (mapped into memory) in a contiguous manner, as well. However, there may be gaps of unassigned addresses or addresses reserved for future use.
The computer system of Figure 8 is given as an example only. The present invention may be practiced on any computer system.
Thus, a method and apparatus for memory management is described.

Claims

1. A method of relocating an application from a first range of memory addresses in a computer system comprising the steps of:
identifying a first portion of said application that is required to be in said first range of memory addresses as a global block;
allocating address space in said first range of memory addresses for storing said global block;
identifying a second portion of said application that is not required to be in said first range of memory addresses as a transient block;
allocating address space in a second range of memory addresses, not within said first range of memory addresses, for storing said transient block;
allocating address space in said first range of memory addresses for temporarily storing said transient block.
PCT/US1994/002523 1993-03-09 1994-03-08 Method and apparatus for memory management WO1994020905A1 (en)
