WO2001018653A9 - Dynamic memory caching - Google Patents
Dynamic memory caching
- Publication number
- WO2001018653A9 (PCT/US2000/024078)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- memory
- address
- cache
- management
- management table
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/10—Address translation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/0223—User address space allocation, e.g. contiguous or non contiguous base addressing
- G06F12/023—Free address space management
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0875—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with dedicated cache, e.g. instruction or stack
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/10—Address translation
- G06F12/1027—Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB]
Definitions
- the present invention relates generally to computer memory allocation and management, and more particularly to efficiently managing the dynamic allocation, access, and release of memory used in a computational environment.
- U.S. Patent Number 5,687,368 (“the '368 patent”) teaches the conventional view of the methods for efficient memory implementation.
- the '368 patent addresses a major shortcoming of the prior art, which is loss of computational performance due to the need for memory management, also called housekeeping, to achieve efficient use of memory.
- the '368 patent teaches the use of a hardware implementation to alleviate the problem of loss of performance in the computational unit.
- the '368 patent does not teach reducing or eliminating housekeeping functions or mapping large, sparsely populated logical memory address space onto smaller, denser physical memory address space as in this invention.
- the '368 patent also does not teach making housekeeping functions more deterministic in the way or to the extent that the present invention does.
- Garbage collection is a term used to describe the processes in a computer which recover previously used memory space when it is no longer in use. Garbage collection also consists of reorganizing memory to reduce the unused spaces created within the stored information when unused memory space is recovered, a condition known as fragmentation.
- the prior art inherently reduces the performance of the computational unit, due to the need to perform these operations and the time consumed thereby. Further, these operations are inherently not substantially deterministic, since the iterative steps required have no easily determinable limit in the number of iterations.
- relocatable memory schemes work by copying memory atomic units (objects) from one location in memory to another, to allow garbage fragments between valid objects to be combined into larger free memory areas.
- relocatable memory also requires indefinite numbers of iterations, and further makes the time required for housekeeping functions substantially non-deterministic. Accordingly, it is desirable to provide a system and method for a dynamic memory manager to overcome these and other limitations in the prior art.
- Caching is a process that stores frequently accessed data and programs in high speed memory local (or internal) to a computer processing unit for improved access time resulting in enhanced system performance.
- Caching relies on "locality of reference,” the statistical probability that, if a computer is accessing one area of memory, future accesses will be to nearby addresses.
- a cache gains much of its performance advantage from the statistical probability that, if a computer is accessing one part of an object, future accesses will be to other parts of the same object.
- Cache memories are classified by the type of association used to access the data (e.g. direct mapped, set associative, or fully associative), the replacement algorithm (e.g. Least Recently Used (LRU) or Least Frequently Used (LFU)), and the write algorithm (e.g. write back or write through).
- Cache memories are typically much smaller than the main system memory.
- the size of a cache memory, type of association, and access statistics of the program(s) executing determine the probability that a piece of data is in the cache when an access to that data occurs. This "hit rate" is a key determinant of system performance. Accordingly, it is desirable to provide a system and method for dynamic memory management technology in conjunction with caching techniques to reduce on-chip memory requirements for dynamic memory management.
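The interplay of cache size and access locality described above can be illustrated with a small simulation. This is a hedged sketch, not part of the patent: the direct-mapped model, the `hit_rate` function, and the chosen sizes are all hypothetical.

```python
# Sketch: estimate the hit rate of a small direct-mapped cache.
# Illustrative only -- the model and sizes are hypothetical, not from the patent.

def hit_rate(accesses, num_lines, line_size=64):
    """Simulate a direct-mapped cache and return hits / total accesses."""
    tags = [None] * num_lines          # one tag per cache line
    hits = 0
    for addr in accesses:
        block = addr // line_size      # block number of this address
        index = block % num_lines      # direct-mapped: block selects one line
        if tags[index] == block:
            hits += 1                  # tag match -> cache hit
        else:
            tags[index] = block        # miss -> fill the line
    return hits / len(accesses)

# Locality of reference: repeatedly walking a small working set hits often.
local = [i % 256 for i in range(10_000)]       # 256-byte working set
scattered = [i * 4096 for i in range(10_000)]  # every access a new block

print(hit_rate(local, num_lines=16))      # high -- working set fits
print(hit_rate(scattered, num_lines=16))  # 0.0 -- no reuse at all
```

With strong locality the working set fits entirely in the cache and nearly every access hits; with no reuse the hit rate collapses to zero, which is why "locality of reference" does the real work in any caching scheme.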
- a system for dynamic memory management maps a sparsely populated virtual address space of memory objects to a more densely populated physical address space of fixed size memory elements for use by a host processor.
- the system comprises an object cache for caching frequently accessed memory elements and an object manager for managing the memory objects used by the host processor.
- the object manager may further comprise an address translation table for translating virtual space addresses for a memory object received from the host processor to a physical space address for a memory element, and a management table for storing data associated with the memory objects and memory elements.
- the address translation table and the management table are stored in the physical system memory.
- the present invention further comprises an address translation table cache for caching the most recently or most frequently used address translation table entries.
- the present invention further comprises a management table cache for caching the most recently or most frequently used management table entries.
- a method for mapping a memory object used by a host processor to a memory element stored in physical memory comprises the steps of receiving a virtual space address for a memory object used by a host processor, determining a physical space address for the memory element or elements in the memory object, and retrieving the memory element from the physical system memory.
- the present invention first checks the object cache to determine whether the memory element has been cached. If the memory element is in the object cache, it is an object cache "hit”. If the memory element is not stored in the object cache, it is an object cache "miss”, and the memory element is retrieved from physical system memory and stored in the cache according to the cache replacement logic.
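The hit/miss check just described can be sketched in software. This is an illustrative model only: the `ObjectCache` class, its LRU eviction, and the dictionary standing in for physical system memory are assumptions, not the patent's hardware implementation.

```python
# Sketch of the object-cache check: a hit returns the cached element; a miss
# fetches it from "physical memory" and caches it per the replacement logic.
# All names here are hypothetical.
from collections import OrderedDict

class ObjectCache:
    def __init__(self, capacity, physical_memory):
        self.capacity = capacity
        self.physical = physical_memory    # stand-in for physical system memory
        self.lines = OrderedDict()         # phys_addr -> element data
        self.hits = self.misses = 0

    def access(self, phys_addr):
        if phys_addr in self.lines:        # object cache "hit"
            self.hits += 1
            self.lines.move_to_end(phys_addr)   # mark most recently used
            return self.lines[phys_addr]
        self.misses += 1                   # object cache "miss"
        if len(self.lines) >= self.capacity:
            self.lines.popitem(last=False) # evict least recently used line
        data = self.physical[phys_addr]    # retrieve from system memory
        self.lines[phys_addr] = data
        return data

mem = {addr: f"element@{addr:#x}" for addr in range(0, 0x400, 0x40)}
cache = ObjectCache(capacity=4, physical_memory=mem)
cache.access(0x40)      # miss -- fetched and cached
cache.access(0x40)      # hit
print(cache.hits, cache.misses)   # 1 1
```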
- Figure 1 is a high level block diagram of one embodiment of a system in accordance with the present invention.
- FIGS 2A-2C are high level block diagrams of other embodiments of systems in accordance with the present invention.
- Figure 3A is a dynamic memory mapping diagram in accordance with one embodiment of the present invention.
- Figure 3B is another embodiment of the present invention comprising caching associative memories.
- Figure 4 is a block diagram of one embodiment of a Dynamic Memory Cache in accordance with the present invention.
- FIG. 5 is a block diagram illustrating additional details of the management module 404.
- Figure 6 is a flow chart of one embodiment of the main loop process for the control sequencer 414.
- Figure 7 is a flow chart of one embodiment of the initialize process for the control sequencer 414.
- Figure 8 is a flow chart of one embodiment of the allocate process for the control sequencer 414.
- Figure 9 is a flow chart of one embodiment for a release process for the control sequencer 414.
- Figure 10 is a flow chart of one embodiment of the diagnostic process of the control sequencer 414.
- FIG 11 is a block diagram of one embodiment of an aging process for a Least Recently Used (LRU) replacement algorithm.
- Figure 12 is a block diagram of an LRU replacement algorithm implemented using a distributed implementation of an aging circuit.
- Figure 13 is a block diagram of a single distributed oldest circuit.
- Figure 14 is a functional block diagram of one embodiment of an address translation module 402.
- Figure 15 is a block diagram of the address concatenator 410.
- Figure 16 is a flow chart of one embodiment for allocating and releasing a memory object in accordance with the present invention.
- the present invention comprises a Dynamic Memory Cache ("DMC") 102 coupled to a host processor 104 and to other memory 106.
- the host processor 104 has a level 1 cache.
- the other memory 106 may comprise a RAM, ROM, Flash or other memory, or may comprise other devices such as a disk, video, network, etc.
- the present invention provides a dynamically allocated memory object (not shown) for use by the host processor 104.
- the memory object comprises a plurality of memory elements or locations in other memory 106.
- the present invention maps the memory object used by the host processor 104 to a plurality of memory elements in the other memory 106.
- the memory elements are memory locations of fixed size in the other memory 106.
- memory elements may be 16 bytes or they may be 64 bytes.
- the DMC 102 manages the memory objects used by the host processor 104 and performs the address translation functions between the host processor 104 and the other memory 106. Memory objects and memory object mappings are described in detail in copending application serial no. 09/203,995 entitled “Dynamic Memory Manager with Improved Housekeeping" by Walter E. Croft and Alex E. Henderson, which application was filed on December 1, 1998, and which application is incorporated herein by reference in its entirety.
- the present invention advantageously allocates memory objects to the host processor 104 from a large sparsely populated virtual memory space and maps the allocated memory objects to a smaller densely populated physical memory space.
- This mapping provides the basis for the removal of dynamic memory housekeeping functions such as "garbage collection”, de-fragmentation, and compaction.
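The sparse-to-dense mapping that removes the need for housekeeping can be sketched as follows. This is a minimal software model under assumed names (`free_pool`, `translation`) and an assumed 64-byte element size; the patent implements the equivalent with hardware tables.

```python
# Minimal sketch of the sparse-to-dense mapping: each allocated object gets
# fixed-size elements from a free pool, and a translation table maps its
# virtual range onto them. Released elements simply return to the pool, so
# no compaction or de-fragmentation pass is ever needed. Hypothetical names.

ELEMENT_SIZE = 64                       # fixed-size memory elements (assumed)

free_pool = list(range(16))             # dense physical element numbers
translation = {}                        # (virtual base, index) -> element

def allocate(virtual_base, size):
    n = -(-size // ELEMENT_SIZE)        # ceiling division: elements needed
    for index in range(n):
        translation[(virtual_base, index)] = free_pool.pop(0)

def release(virtual_base):
    for key in [k for k in translation if k[0] == virtual_base]:
        free_pool.append(translation.pop(key))   # back to the pool

allocate(0x8000_0000, 130)   # sparse virtual base, needs 3 elements
allocate(0x9000_0000, 40)    # far away virtually, adjacent physically
print(sorted(translation.values()))   # [0, 1, 2, 3] -- densely packed
release(0x8000_0000)
```

Note that the two objects are gigabytes apart in virtual space yet occupy adjacent physical elements, and releasing one leaves no hole that needs compacting.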
- Referring now to FIG. 2, there is shown a high level block diagram of another embodiment of a system in accordance with the present invention.
- the present invention comprises a DMC 102 coupled to CPU or host processor 204 and to a bus interface 206 to a separate memory location.
- the DMC 102 further comprises an object manager 208 for allocation, de-allocation, and control of caching of the memory elements, and an object cache 210 for the storage of cached memory elements.
- Figure 2A also shows a conventional data cache 212, a conventional data Translation Lookaside Buffer (TLB), a conventional instruction cache 214, and an instruction Translation Lookaside Buffer (TLB) to illustrate the high level similarities between the operation of the DMC and that of conventional caches with respect to the CPU 204 and the bus interface 206.
- FIGS 2B and 2C illustrate various useful combinations of conventional TLB and caching with object management and object caching. These are analogous to conventional combined or "unified" instruction and data TLB and caches and offer the benefits of shared TLB tables and caches while maintaining the benefits of object management and object caching.
- the present invention comprises a host processor virtual address space 304 for storing the memory objects 308A, 308B, and 308C, that are used by the CPU or host processor.
- Each memory object is mapped to one or more memory elements located in the physical system memory 306.
- memory object 308A is mapped to three memory elements and memory object 308B is mapped to one memory element.
- the virtual space address of the memory object 308 used by the host processor is input to the DMC 102 for translation by the address translation module 310.
- the address translation module 310 translates virtual space addresses for memory objects 308 to physical space addresses for memory elements.
- the memory element is stored in the object cache 210 and can be accessed using the physical space address for the memory element. If the host processor accesses a memory element not found in the object cache 210, a miss will occur and the object manager 208 will replace entries in the management table, address translation table, and object cache to provide access to the desired object.
- the DMC 102 maintains large software management and address translation tables in physical system memory 306. These large tables allow the management of very large numbers of objects.
- physical system memory 306 maintains four data structures: a memory element table 312, a management table 314, an address translation table 316, and a process table 318.
- the memory element table 312 is a pool of small fixed sized memory areas ("memory elements") used to store data. These memory areas may or may not be sequentially located in memory. In one embodiment, these memory areas may be partitioned into multiple separate pools of memory elements allocated on a per process basis.
- Management table 314 refers to a table or group of tables that store information about the size and address translation table entries of each allocated memory object.
- the management table 314 may be organized as an AVL tree, a hash table, a binary tree, a sorted table, or any other organizational structure that allows for rapid search and insertion and deletion of entries.
- the most frequently used or most recently used management table entries are stored in a management table cache.
- Address translation table 316 refers to a table or group of tables that store the virtual to physical address translation information for each memory element. In one embodiment, a single memory object will typically use several address translation table entries. In a preferred embodiment, the address translation table 316 may be organized as an AVL tree, a hash table, a binary tree, a sorted table, or any other organizational structure that allows for rapid search and insertion and deletion of entries. In another embodiment, the most frequently used or most recently used address translation table entries are stored in an address translation table cache.
- the process table 318 refers to a table sorted by process, program, or thread ID that is used to locate the management table entries for memory objects associated with a particular process, program, or thread. In a preferred embodiment, this table is organized as an AVL tree to allow for rapid search and insertion and deletion of entries.
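The four data structures above can be modeled as plain containers. The sketch below substitutes dicts for the patent's AVL-tree organization and uses hypothetical field names purely for illustration.

```python
# Sketch of the four system-memory structures as plain Python containers.
# Dicts stand in for the AVL trees; field names are hypothetical.

memory_elements = {}       # element number -> fixed-size storage (element table 312)
management = {}            # object number  -> size + its translation entries (table 314)
address_translation = {}   # (base address, block index) -> element number (table 316)
process_table = {}         # process ID -> object numbers owned by it (table 318)

def new_object(pid, obj_num, base, size, elements):
    """Register a memory object built from the given physical elements."""
    management[obj_num] = {"size": size, "entries": []}
    for idx, elem in enumerate(elements):
        memory_elements.setdefault(elem, bytearray(64))   # assumed 64-byte elements
        address_translation[(base, idx)] = elem
        management[obj_num]["entries"].append((base, idx))
    process_table.setdefault(pid, []).append(obj_num)

new_object(pid=7, obj_num=1, base=0x8000_0000, size=130, elements=[0, 1, 2])
print(process_table[7])                         # [1]
print(address_translation[(0x8000_0000, 2)])    # 2
```

The process table entry lets the system find every object a process owns, which is exactly what the release path needs.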
- Referring now to Figure 3B, there is shown another embodiment of the present invention.
- the embodiment in Figure 3B uses caching associative memories to implement the management table and the address translation table. Caching associative memories are described in more detail in copending U.S. patent application serial number , entitled “Caching Associative Memories" by Alex E. Henderson and Walter E. Croft, which application was filed on August 10, 2000 and which application is incorporated herein by reference in its entirety.
- the management table 326 is stored in a main associative memory and the address translation table 324 is stored in a main associative memory.
- the most frequently used or most recently used management table entries are stored in a management table associative memory cache 322.
- the most frequently used or most recently used address translation table entries are stored in an address translation table associative memory cache 320.
- Associative memory caches have replacement logic to manage the replacement of cached data as explained in U.S. patent application serial no. .
- the operating system, supervisor, or system process then dynamically allocates space for a new management table entry (an object belonging to the system process) and as many address translation table entries (also belonging to the system process) as required to describe the requested object.
- the user process can then access the new memory object.
- Deallocation is the reverse process: the system objects used for the address translation and management table entries are themselves deallocated.
- Referring now to FIG. 4, there is shown a block diagram of one embodiment of the DMC 102 in accordance with the present invention.
- the DMC 102 comprises an address translation module 402, a management module 404, and an object cache 406.
- the address translation module 402 and management module 404 communicate directly with the CPU or host processor, and are coupled to the object cache 406 via data bus 408.
- the management module 404 manages the object cache 406 and address translation module 402 for the DMC.
- the management module 404 preferably comprises a control sequencer 414, management registers 416, and a management table cache 418.
- Control sequencer 414 scans the CPU registers (not shown) for host processor commands, executes valid commands, and loads results for the host processor 104.
- Management table cache 418 contains an entry for each memory object active in the DMC 102.
- the address translation module 402 translates the CPU virtual space address for a memory object to a physical memory space address for a memory element.
- the address translation module 402 comprises an address concatenator 410 and an address translation table cache 412.
- the address translation table cache 412 performs the content addressable memory (“CAM") lookup of object base address and object block index bits of the host processor virtual space address for the memory object, as described in more detail with reference to Figure 14. If a valid cache entry exists for the physical address of the memory element, the address translation table cache 412 provides a cache address and physical memory address.
- the address translation table cache 412 contains memory element information comprising an object base address, which is known to the management table cache 418, an object block index, which is a secondary portion of the base address, a link to the next object base address/block index pair, a link back to the management table cache 418 entry for this object, an address of segment in cache, and an address of segment in system memory.
- the address concatenator 410 receives the address of the segment in cache from the address translation table cache 412.
- the address concatenator 410 also receives pass through low order bits of the host process address.
- the address concatenator 410 then concatenates the cache address and pass through low order bits and generates the cache memory address for the object cache 406.
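The concatenation step can be shown in a few lines. The 6-bit offset width matches the 64-byte segment example given later in the description; the function name and the assumption that the cache line address is 64-byte aligned are mine, not the patent's.

```python
# Sketch of the address concatenator: the translated cache address supplies
# the high-order bits while the low-order offset bits of the host address
# pass through unchanged. Assumes a 64-byte-aligned cache line address.

OFFSET_BITS = 6                               # low-order pass-through bits
OFFSET_MASK = (1 << OFFSET_BITS) - 1          # 0x3F

def concatenate(cache_line_addr, host_addr):
    """Join the translated cache address with the host address offset."""
    return cache_line_addr | (host_addr & OFFSET_MASK)

# Cache line at 0x100, host access with byte offset 0x25:
print(hex(concatenate(0x100, 0x8000_0025)))   # 0x125
```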
- the object cache 406 provides a fast local memory used to store frequently accessed memory element data.
- the cache replacement logic for object cache 406 selects the cache line or lines to be replaced in case of management table cache 418 or address translation table cache 412 misses.
- the object cache 406 uses a Least Recently Used ("LRU") replacement algorithm.
- the object cache 406 may include a write buffer to implement a delayed write of altered object data to other memory 106.
- the write may be a single word for write through caching or a complete object cache line buffer for write back caching. Write back and write through may be a selectable mode.
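The trade-off between the two write policies can be sketched as follows. The minimal `Line` model and the write counter are hypothetical; the point is that write-back coalesces many stores into a single memory write at eviction, while write-through pushes every store out immediately.

```python
# Sketch of the two write policies: write-through sends each word to system
# memory immediately; write-back marks the line dirty and flushes the whole
# line only on eviction. Minimal hypothetical model.

class Line:
    def __init__(self):
        self.data = bytearray(64)
        self.dirty = False

system_memory = {}                     # line address -> flushed bytes
writes_to_memory = 0

def write(line, line_addr, offset, value, policy):
    global writes_to_memory
    line.data[offset] = value
    if policy == "write-through":
        system_memory[line_addr] = bytes(line.data)
        writes_to_memory += 1          # every store reaches memory
    else:                              # write-back: defer until eviction
        line.dirty = True

def evict(line, line_addr):
    global writes_to_memory
    if line.dirty:
        system_memory[line_addr] = bytes(line.data)
        writes_to_memory += 1          # one flush covers many stores
        line.dirty = False

line = Line()
for off in range(8):
    write(line, 0x100, off, 0xAB, "write-back")
evict(line, 0x100)
print(writes_to_memory)   # 1 -- eight stores, one memory write
```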
- optional object cache coherency logic may be used for monitoring system bus writes by other devices to shared objects.
- the coherency logic may implement any of the classical bus snooping and cache coherency schemes.
- Management table cache 418 may also contain optional user and system data.
- Figure 5 also shows an example of three dynamically allocated memory objects of varying size added after DMC initialization.
- the object start address 502 and the object size 504 of the three memory objects define the location and extent of the memory objects in the virtual address space of the process specified by the process ID 506.
- Object number field 510 provides the index to the management table 314.
- Age and Dirty Flag 508 and object number 510 are used to implement a LRU replacement algorithm.
- all ages 508 are set to zero and dirty flags 508 are cleared by a system reset.
- the entry with the largest object number 510 is replaced.
- Management registers 416 provide working data for the DMC. These registers contain information about the address translation module 402 and the management module 404. The management registers 416 contain results of host processor commands that are returned to the host via the user registers. Management registers 416 comprise a set of permanent registers 512 and temporary registers 514.
- the permanent registers 512 contain information such as the maximum size of a memory object, the number of free entries in the management table cache 418, a pointer to the next free entry in the management table cache 418, the number of free entries in the address translation table cache 412, and a pointer to the next free entry in the address translation table cache 412.
- the permanent registers 512 are initialized at power on and reset.
- Temporary registers 514 contain information such as the memory size requested, the calculated number of address translation table cache entries, and pointers, counters, etc.
- Referring now to FIG. 6, there is shown a flow chart of one embodiment of the main loop process for the control sequencer 414.
- This process is started by a system reset. After the system reset, the initialize process initializes the DMC. After initialization is complete, the control sequencer polls the device control register for a command. When a command is detected, the busy indication is set in the device status register 606. The command is decoded to determine which sub process should run. If no valid command is found, the command error bit in the device status register is set 626; otherwise the command results bits in the device status register are set 624 on sub process completion. The busy indication in the device status register is then cleared 628 and the contents of the user registers are available 630 to the CPU.
- Referring now to FIG. 7, there is shown a flow chart of one embodiment of the initialize process for the control sequencer 414.
- the process starts at 702 and builds a free list of address translation table cache entries 704.
- the process then builds a free list of management table cache entries 706.
- the process initializes the management registers 708 and ends at 710.
- Referring now to FIG. 8, there is shown a flow chart of one embodiment of the allocate process for the control sequencer 414.
- the process starts at 802 and determines 804 whether a management table cache entry is free. If an entry is not free, the device status register is set to indicate an allocate error 806 and the process ends 818. If an entry is free, the process then determines 808 whether an address translation table cache entry is free. If an entry is not free, the device status register is set to indicate an allocate error 806 and the process ends 818. If an entry is free, the process gets an entry from the management table cache free list and adds the management table cache entry 810. The process then gets entries from the address translation table cache free list and adds and links address translation table cache entries 812. The process then updates 814 the management registers. Finally, results of the allocate are stored in the device status register and the allocated object is available for use 816.
- Referring now to FIG. 9, there is shown a flow chart of one embodiment for a release process for the control sequencer 414.
- the process starts at 902 and determines 904 whether the management table cache entry has been found. If the answer is no, the device status register indicates a "release error" 906 and ends at 918. If the management table cache entry is found, the process then determines 908 whether the address translation table cache entries can be found. If the answer is no, the device status register indicates a "release error" 906 and the process ends 918. If the answer is yes, the process deletes the management table cache entry and returns the entry to the management table free list 910. The process then deletes the address translation table entries and returns the entries to the address translation table free list 912. Afterwards, the process updates 914 the management registers. The device status register then indicates 916 the release results and indicates that the released object is not accessible.
- the diagnostic process provides software access to the internal data structures of the DMC for software diagnostics.
- Sub commands are provided to read and write the Address Translation Table cache 412, Management Table cache 418, and Management Registers 416. These commands are decoded by decisions 1002.
- the parameters for these commands are validated by the decisions 1004. If either a bad sub command or an invalid parameter is detected, the diagnostic error indication in the device status register is set. If the sub command and parameters are valid, the read or write function 1006 is executed and the read or write result is set in the device status register 1010.
- the entry match logic compares 1102 the process ID and virtual address from the CPU with the values stored in the management table cache process ID, object start address 502, and object size 504. If there is a match, a management table cache hit has occurred and the ages of the management table cache entries must be updated.
- the age process 1104 works as follows: The age of the management table cache entry for which the hit occurred is driven 1106 on the current age bus. The age of any entry with an age greater than the current age is decremented. The age of the management table cache entry for which the hit occurred is set to the number of management table cache entries minus one. The other age entries are unchanged.
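The aging rule just described can be rendered directly in software. Assuming the entries hold distinct ages (as they do once the table has filled after reset), the rule keeps the ages a permutation of 0 to N−1; the function name is hypothetical.

```python
# Direct software rendering of the age process 1104: decrement every entry
# whose age exceeds the hit entry's age (the "current age bus" value), then
# set the hit entry's age to N - 1. Other entries are unchanged.

def update_ages(ages, hit_index):
    """Apply the LRU aging rule; mutates and returns the ages list."""
    n = len(ages)
    current = ages[hit_index]          # value driven on the current age bus
    for i in range(n):
        if ages[i] > current:
            ages[i] -= 1               # slide down to fill the vacated slot
    ages[hit_index] = n - 1            # hit entry becomes most recently used
    return ages

ages = [3, 0, 2, 1]        # entry 0 most recently used, entry 1 least
print(update_ages(ages, hit_index=2))   # [2, 0, 3, 1]
```

After the update the relative order of all other entries is preserved and the entry with age 0 remains the least recently used candidate for replacement.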
- Referring now to FIG. 12, there is shown an implementation of the age update process in which each management table cache entry's age is compared to the current age by duplicated compare circuits 1202. These circuits determine which entries' ages should be decremented, which should stay the same (no operation or no-op), and which one should be loaded with the total number of management table entries minus one.
- Referring now to Figure 13, there is shown a block diagram of an implementation of a distributed compare circuit. The row with a hit drives the current age bus. All rows compute the greater-than and equal-to signals.
- the address translation module 402 comprises an address concatenator 410 and an address translation table cache 412.
- the address translation table cache 412 comprises a content addressable memory (“CAM") 1402 for enabling fast searches and associated data 1404 for providing entry specific information.
- CAM content addressable memory
- a CAM and associated data are not the only suitable devices for an address translation table cache; any type of associative memory, which allows searches based on content as opposed to address location, may be used for the address translation table cache 412. The description here of a CAM and associated data is for illustrative purposes only.
- the operation of the address translation module 402 is as follows.
- the host processor addresses 1406 are placed on the host processor address bus 1406 and are detected and used as input to the address translation module 402.
- the DMC address range is a 32 bit address range with the high-order 26 bits being utilized for translation and the low-order 6 bits being passed on directly.
- the passed on 6 bits define a maximum segment offset size of 64 bytes.
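The 26/6 split above can be checked with a little bit arithmetic; `split` is a hypothetical helper for illustration, not a named component of the DMC.

```python
# Worked example of the 32-bit split: the high-order 26 bits select the
# translation entry and the low-order 6 bits are the byte offset within a
# 64-byte segment.

def split(addr32):
    translated = addr32 >> 6        # high 26 bits -> translation lookup
    offset = addr32 & 0x3F          # low 6 bits -> pass through directly
    return translated, offset

hi, lo = split(0x8000_0125)
print(hex(hi), hex(lo))                 # 0x2000004 0x25
assert (hi << 6) | lo == 0x8000_0125    # the split is lossless
print(1 << 6)                           # 64 -- maximum segment offset in bytes
```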
- a new management table cache entry may also be required.
- if a search on the CAM 1402 using the Match Data 1408 results in a match, a corresponding match signal 1414 for the CAM entry is asserted for specifying a particular entry in the associated data 1404.
- Individual entries in the associated data 1404 that comprise a single memory element are linked together by a link field 1416. Unused entries are part of the address translation table cache free list. Active entries in the associated data 1404 also have a management table link 1418 for providing a link to the management table cache 418. Unused links are nullified. If a link field 1416 is NULL, signaling that this is the final segment of this memory element, the management table link is used to determine memory object size 504 in bytes.
- the valid byte length of the ending segment can be calculated by the modulus of the object size 504 by the memory element size. The remainder of bytes in the last memory element will range from 1 to the memory element size. In one embodiment, only part of the addresses in this ending segment may be valid. If part of the addresses are invalid, an invalid address bus error is generated to alert the host processor.
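The modulus calculation can be made concrete. The 256-byte element size matches the worked example later in the description; the function name is hypothetical.

```python
# Sketch of the ending-segment calculation: object size modulo element size
# gives the valid bytes in the last segment, with a full element when the
# size divides evenly (hence the stated 1..element-size range).

def valid_bytes_in_last_segment(object_size, element_size=256):
    remainder = object_size % element_size
    return remainder if remainder else element_size

print(valid_bytes_in_last_segment(514))   # 2 -- only 2 of 256 bytes valid
print(valid_bytes_in_last_segment(512))   # 256 -- last segment fully valid
```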
- Translated cache addresses are stored in the mapped address field 1420. Translated cache addresses are determined during initialization and are treated as read-only data elements during operation of the present invention. The cache address 1420 associated with the match data 1408 search is then passed to the address concatenator 410. Thus, validated host processor addresses 1406 enable the mapped address to be concatenated with the pass through low-order 6 bits of the host processor address 1406 to form the translated cache memory address, thereby providing access to the memory object in the cache memory.
- the subdivision of the host processor address 1406 into bits used for translation and pass through bits is not limited to the examples provided here but may be subdivided as necessary or desired for utilization of the invention.
- the low order 16 bits may be used for translation and the high order 16 bits may be used for passing through to the concatenator.
- the translated bits (Bits N+1–L) are then retrieved from the address translation table cache 412 as described with reference to Figure 14 and concatenated with the pass through bits (Bits 0–N).
- the newly concatenated translated bits (Bits N+1–L) and the pass through bits (Bits 0–N) are then sent to the managed address space 1506.
- management table cache entry 1 is added at the bottom of the address translation table cache 412.
- the base address field 1410 for this entry starts at 80000000 hexadecimal, or 2^31.
- the block index field 1412 starts at 0 and increases by 100 hex (256 bytes).
- The management table memory allocate size field shows a memory object of 514 bytes, which fits in three 256-byte segments connected by the link field 1416 with values of 1, 2, and NULL to end the list of segments.
- The translated cache memory addresses 0, 100, and 200 hex are the cache memory addresses 1420 for the 514-byte memory object.
- The translated cache addresses fall on 256-byte boundaries at offsets of 0, 256, and 512 bytes, respectively.
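The three linked entries of this example can be sketched as follows (the struct layout and function name are assumptions made for illustration; only the field names follow the figures):

```c
#include <stddef.h>
#include <stdint.h>

#define SEG_SIZE 0x100u  /* 256-byte segments */

/* Simplified address translation table entry; field names follow the
 * figures (1410, 1412, 1416, 1420) but the layout is an assumption. */
struct att_entry {
    uint32_t base_address;      /* base address field 1410      */
    uint32_t block_index;       /* block index field 1412       */
    struct att_entry *link;     /* link field 1416, NULL at end */
    uint32_t mapped_address;    /* mapped address field 1420    */
};

/* Build the segment chain for management table cache entry 1:
 * base 80000000 hex, block indices and mapped cache addresses
 * stepping by 100 hex per segment, NULL link ending the list. */
static void build_entry_1(struct att_entry seg[], int nsegs)
{
    for (int i = 0; i < nsegs; i++) {
        seg[i].base_address   = 0x80000000u;
        seg[i].block_index    = (uint32_t)i * SEG_SIZE;
        seg[i].mapped_address = (uint32_t)i * SEG_SIZE;
        seg[i].link = (i < nsegs - 1) ? &seg[i + 1] : NULL;
    }
}
```

For the 514-byte object, three segments yield mapped addresses of 0, 100, and 200 hex, that is, 256-byte boundaries at offsets of 0, 256, and 512 bytes.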
- Management table cache entry 2 is added above management table cache entry 1 in this example.
- The base address starts at 80010000 hex, which is 65,536 bytes above the start address for management table cache entry 1.
- This sets the maximum individual memory object size at 65,536 bytes, built from 256 address translation table entries.
- A memory cache object is allocated or freed by first creating or removing 1602 a management table cache entry for the object in the management table cache 418 for the currently executing process, program, or thread. The address translation table cache entries for the memory element are then created or removed 1604 for that process, program, or thread. Finally, new address translation table cache entries are pointed 1606 at memory allocated from the memory element pool; on removal, the allocated memory is returned to the memory element pool.
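The allocation path 1602-1606 might be sketched as follows (the structures, the `cache_alloc` name, and the use of `malloc` in place of the memory element pool are all assumptions made for illustration):

```c
#include <stdint.h>
#include <stdlib.h>

#define ELEMENT_SIZE 256u

/* Hypothetical simplifications of the patent's structures. */
struct att_entry {              /* address translation table cache entry */
    void *element;              /* memory from the memory element pool   */
    struct att_entry *link;     /* next segment, NULL ends the chain     */
};

struct mgmt_entry {             /* management table cache entry */
    uint32_t alloc_size;        /* memory object size in bytes  */
    struct att_entry *first;    /* head of the segment chain    */
};

/* Step 1602: create the management table cache entry.
 * Step 1604: create one translation entry per segment.
 * Step 1606: point each entry at memory from the element pool
 * (malloc stands in for the pool; error handling is elided). */
static struct mgmt_entry *cache_alloc(uint32_t size)
{
    struct mgmt_entry *m = malloc(sizeof *m);         /* 1602 */
    m->alloc_size = size;
    m->first = NULL;
    struct att_entry **tail = &m->first;
    uint32_t nsegs = (size + ELEMENT_SIZE - 1) / ELEMENT_SIZE;
    for (uint32_t i = 0; i < nsegs; i++) {
        struct att_entry *e = malloc(sizeof *e);      /* 1604 */
        e->element = malloc(ELEMENT_SIZE);            /* 1606 */
        e->link = NULL;
        *tail = e;
        tail = &e->link;
    }
    return m;
}
```

Freeing reverses the sequence: the entries are removed and the memory elements are returned to the pool.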
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP00961477A EP1222546A1 (en) | 1999-09-07 | 2000-09-01 | Dynamic memory caching |
AU73423/00A AU7342300A (en) | 1999-09-07 | 2000-09-01 | Dynamic memory caching |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15268099P | 1999-09-07 | 1999-09-07 | |
US60/152,680 | 1999-09-07 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2001018653A1 WO2001018653A1 (en) | 2001-03-15 |
WO2001018653A9 true WO2001018653A9 (en) | 2002-09-12 |
Family
ID=22543939
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2000/024078 WO2001018653A1 (en) | 1999-09-07 | 2000-09-01 | Dynamic memory caching |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP1222546A1 (en) |
AU (1) | AU7342300A (en) |
WO (1) | WO2001018653A1 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6859868B2 (en) | 2002-02-07 | 2005-02-22 | Sun Microsystems, Inc. | Object addressed memory hierarchy |
US11119915B2 (en) | 2018-02-08 | 2021-09-14 | Samsung Electronics Co., Ltd. | Dynamic memory mapping for neural networks |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5396614A (en) * | 1992-06-25 | 1995-03-07 | Sun Microsystems, Inc. | Method and apparatus for a secure protocol for virtual memory managers that use memory objects |
US5442766A (en) * | 1992-10-09 | 1995-08-15 | International Business Machines Corporation | Method and system for distributed instruction address translation in a multiscalar data processing system |
US5729710A (en) * | 1994-06-22 | 1998-03-17 | International Business Machines Corporation | Method and apparatus for management of mapped and unmapped regions of memory in a microkernel data processing system |
-
2000
- 2000-09-01 AU AU73423/00A patent/AU7342300A/en not_active Abandoned
- 2000-09-01 EP EP00961477A patent/EP1222546A1/en not_active Withdrawn
- 2000-09-01 WO PCT/US2000/024078 patent/WO2001018653A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
AU7342300A (en) | 2001-04-10 |
EP1222546A1 (en) | 2002-07-17 |
WO2001018653A1 (en) | 2001-03-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6446188B1 (en) | Caching dynamically allocated objects | |
US5630097A (en) | Enhanced cache operation with remapping of pages for optimizing data relocation from addresses causing cache misses | |
US7793049B2 (en) | Mechanism for data cache replacement based on region policies | |
JP2554449B2 (en) | Data processing system having cache memory | |
US6381676B2 (en) | Cache management for a multi-threaded processor | |
US7085890B2 (en) | Memory mapping to reduce cache conflicts in multiprocessor systems | |
KR100637610B1 (en) | Cache replacement policy with locking | |
US6640283B2 (en) | Apparatus for cache compression engine for data compression of on-chip caches to increase effective cache size | |
JP3795985B2 (en) | Computer memory system contention cache | |
US5651136A (en) | System and method for increasing cache efficiency through optimized data allocation | |
US5509135A (en) | Multi-index multi-way set-associative cache | |
US5717893A (en) | Method for managing a cache hierarchy having a least recently used (LRU) global cache and a plurality of LRU destaging local caches containing counterpart datatype partitions | |
US5813031A (en) | Caching tag for a large scale cache computer memory system | |
US6795897B2 (en) | Selective memory controller access path for directory caching | |
US6912623B2 (en) | Method and apparatus for multithreaded cache with simplified implementation of cache replacement policy | |
US20080215816A1 (en) | Apparatus and method for filtering unused sub-blocks in cache memories | |
EP0780769A1 (en) | Hybrid numa coma caching system and methods for selecting between the caching modes | |
US7020748B2 (en) | Cache replacement policy to mitigate pollution in multicore processors | |
EP1532532A1 (en) | Method and apparatus for multithreaded cache with cache eviction based on thread identifier | |
US7237067B2 (en) | Managing a multi-way associative cache | |
US6553477B1 (en) | Microprocessor and address translation method for microprocessor | |
JP3262519B2 (en) | Method and system for enhancing processor memory performance by removing old lines in second level cache | |
US7007135B2 (en) | Multi-level cache system with simplified miss/replacement control | |
US7237084B2 (en) | Method and program product for avoiding cache congestion by offsetting addresses while allocating memory | |
WO2001018653A9 (en) | Dynamic memory caching |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CU CZ DE DK EE ES FI GB GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK TJ TM TR TT UA UG UZ VN YU ZA ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2000961477 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 2000961477 Country of ref document: EP |
|
REG | Reference to national code |
Ref country code: DE Ref legal event code: 8642 |
|
AK | Designated states |
Kind code of ref document: C2 Designated state(s): AE AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CU CZ DE DK EE ES FI GB GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK TJ TM TR TT UA UG UZ VN YU ZA ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: C2 Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG |
|
COP | Corrected version of pamphlet |
Free format text: PAGES 1/19-19/19, DRAWINGS, REPLACED BY NEW PAGES 1/18-18/18; DUE TO LATE TRANSMITTAL BY THE RECEIVING OFFICE |
|
NENP | Non-entry into the national phase |
Ref country code: JP |