WO1988009014A2 - Memory addressing system - Google Patents

Memory addressing system

Info

Publication number
WO1988009014A2
WO1988009014A2 PCT/US1988/001388
Authority
WO
WIPO (PCT)
Prior art keywords
memory
information
address information
identification
memory address
Prior art date
Application number
PCT/US1988/001388
Other languages
French (fr)
Other versions
WO1988009014A3 (en)
Inventor
Joseph Michael Sekel
Donald James Girard
Original Assignee
Ncr Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ncr Corporation
Publication of WO1988009014A2
Publication of WO1988009014A3

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/10Address translation
    • G06F12/1027Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB]
    • G06F12/1045Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB] associated with a data cache
    • G06F12/1054Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB] associated with a data cache the data cache being concurrently physically addressed

Definitions

  • This invention relates to memory addressing systems of the kind including data processing means adapted to control memory access operations; cache memory means adapted to provide relatively fast access to data; and second memory means adapted to provide relatively slow, large-scale storage.
  • This invention also relates to a method for utilization of translated memory address information in a memory system having a fast cache memory and a slower second memory.
  • U.S. Patent No. 4,386,402 discloses a memory addressing system including a processor, main memory means, buffer memory means and address translation means, in which the buffer memory includes a cache memory portion and an interrupt stack portion.
  • the known memory addressing system has the disadvantage of limited speed of operation resulting from the time taken to effect address translation.
  • a memory addressing system of the kind specified, characterized by: translating means coupled to said data processing means and adapted to translate information including memory address information received from said data processing means into a different form for accessing said cache memory means and said second memory means; storage means adapted to store first memory address information translated by said translating means during a first cycle of operation of the system; means for accessing the first memory address information from said storage means and utilizing it during a second cycle of operating during which second memory address information is translated by said translating means; and comparison means adapted to compare said first and second memory address information, whereby said cache memory means is enabled to be accessed using said first memory address information if said first and second memory address information are found to be identical by said comparison, and said second memory means is caused to be accessed using said second memory address information if said first and second memory address information are found not to be identical by said comparison.
  • a method of the kind specified characterized by the following steps: storing first memory address information during a first cycle of operation, translating information including second memory address information during a second cycle of operation; commencing a memory access operation of the cache memory during the second cycle of operation using said first memory address information while said second memory address information is being translated; comparing said first and second memory address information when the translation of said second memory address information has been completed; enabling an access operation of the cache memory if the first and second memory address information are identical; and performing an access operation of the second memory using said second memory address information if the first and second memory address information are not identical.
  • FIG. 1A and 1B taken together, constitute a block diagram of a system embodying the present invention.
  • Fig. 2 is a block diagram of cache memory control circuitry.
  • Fig. 3 is a diagram showing the format of an address for addressing the cache memory.
  • Fig. 4 is a map showing the manner in which Figs. 5A, 5B, 5C and 5D are combined to provide a detailed diagram of certain of the system circuitry.
  • Figs. 5A, 5B, 5C and 5D together constitute a detailed diagram of certain of the system circuitry.
  • Figs. 6A and 6B; 7A and 7B; 8A and 8B; 9A and 9B; 10A, 10B, 10C and 10D; 11A and 11B; 12A and 12B; 13A and 13B and 14 constitute additional detailed diagrams of certain portions of the system circuitry.
  • Fig. 15 is a diagram showing waveforms of certain signals utilized in the system.
  • Fig. 16 is a map showing the manner in which Figs. 10A, 10B, 10C and 10D are combined to provide a detailed diagram of certain of the system circuitry.
  • FIGs. 1A and 1B taken together, constitute a block diagram of a data processing system embodying the present invention.
  • a microprocessor, or MPU 20, which may be of type 68010, manufactured by Motorola, Inc., is the main processor for the system in which the present invention is embodied, and controls terminals and other devices of the system. It outputs addresses to memory and receives instructions and data in response which enable it to exert the necessary controls.
  • the microprocessor 20 is physically located on a circuit board, which is electrically coupled, by an interface or back plane represented by line 23, to a second circuit board 22, which includes components such as a cache memory 26 which comprises a data memory 28 and a tag, or identification, memory 30.
  • Each data location in memory 28 is associated with a corresponding tag location in memory 30, which provides an identification or label for the associated data.
  • the same address is used for accessing both the data memory 28 and the tag memory 30, and the tag information which is accessed is compared to tag information provided by the processor 20 to assure that the data from the memory 28 is actually the data sought. If the comparison is correct, the corresponding data is then used by the system for the intended purpose. If the comparison is not correct, then the newly provided tag information is substituted in the tag memory 30 for the previous tag information, and at the same time, new data corresponding to the newly provided tag information is read from the main memory 19 and provided over bus 29 to the data memory 28, into which it is written, to be available for possible future rapid access. It should be noted that in every cache memory access operation, the main memory 19 is also accessed in parallel, so that if the desired data is not in the cache memory 26, it can be obtained from the main memory 19 with minimum delay.
  • the memories 28 and 30 may be static random access memories, 4K by 16 in capacity.
  • a cache memory is a relatively small memory in which may be stored data which is frequently used or otherwise needed, in order to provide more rapid access than can generally be obtained from a main memory, such as the memory 19, shown in block form in Fig. 1B, which is coupled to the rest of the circuitry at interface 23.
  • cache memories are more expensive than main memories per storage location, and are therefore generally much smaller, so that their use is limited to essential data, which may be changed from time to time as needed during an operation.
  • information which has already been used in an operation is selectively placed in cache memory, since this same information may be needed again during the operation, and can be accessed more quickly from the cache memory than from the main memory.
  • Accessing of the cache memory 26 by the MPU 20 is accomplished by means of a 23-bit address, the format 32 of which is shown in Fig. 3, with bits 1-23 extending from the right to left in the figure.
  • the format 32 includes a 12-bit index or address portion 34 which includes bits 11 and 12, comprising the most significant bits of the index portion, designated as prediction bits 36 and 38, and also includes an 11-bit tag portion 40, which is used for checking or identification purposes. The manner in which the address and tag portions are employed will be subsequently described.
  • Address bits 1-10 inclusive are transmitted from MPU 20 over lines LA1-LA10 in a bus 42 to a buffer 44, which may be of type F244, and which enables the signals on the lines, now designated BA1-BA10, to cross the back plane 23, via a bus 45, to a bus receiver 46, which also may be of type F244, on board 22.
  • the address lines extend from the receiver 46, and are now designated CA1-CA10, in a bus 48. These lines are joined by lines CA11 and CA12 in a bus 50, and are applied to both the data memory 28 and the tag memory 30 of the cache memory 26.
  • the generation of the address bits on lines CA11 and CA12 will subsequently be described in detail. Due to the particular parameters and characteristics of the system, the address bits CA11 and CA12 cannot be included with the address bits CA1-CA10, but must be grouped for processing with the tag bits A13-A23. It will be noted that since the address bits CA11 and CA12 are the most significant bits of the index address, they are the bits least likely to change from one operation to the next, and therefore the bits whose content is easiest to predict on the basis of probability.
  • each cache memory address and tag word is generated by the MPU 20 in either logical address form, if a memory management unit (MMU) 52 on the board 22 is to be used, or in physical address form, if the MMU 52 is to be bypassed, as can be done under control of the MPU 20.
  • MMU 52 is a device which is extremely slow in operation compared to the operating speed of the remainder of the system, and which is used for the translation of logical addresses from the MPU 20 into physical addresses for accessing the cache memory 26 or the main memory 19. It is employed in connection with user programs, as distinguished from supervisory programs, in which the physical addresses in the system have not been established, and only relative, or logical, addresses have been provided. Use of logical addresses gives programmers greater latitude, and enables the use of a smaller main memory than would otherwise be required.
  • the MMU 52 is to be bypassed and not used.
  • the address lines A11-A23 which include the tag portion 40 and prediction bits 36, 38 are in physical, as distinguished from logical, form and are carried in buses 54 and 56 to a gate 58, which may be of type F244.
  • the gate 58 is "turned on" only if the MMU 52 is not used, and said gate and bus 56 are used only in such a situation.
  • the gate 58 then passes the lines A11-A23 to a bus 60 which is coupled to a bus 62 extending between a latch 64, which may be of type F373, and a bus driver 66, which may be of type F244.
  • An MMU mode control signal MMUENBL/ is applied to the latch 64 on line 111.
  • a first bus 68 carries address lines BA11 and BA12 across the interface 23 to a second bus 70 which carries said lines to a receiver 72, which may be of type F244.
  • a branch 74 from the bus 70 carries the lines BA11 and BA12 to a flip-flop 76, which may be of type LS374.
  • An MMU mode control signal MMUENBL/ is applied to the flip-flop 76 on line 111.
  • a second branch 78 from the bus 70 carries the lines BA11 and BA12 to a comparator 80 which may be of type 74S86.
  • a "correct guess" output 81 extends from the comparator 80, for a purpose to be subsequently described.
  • a control 82 extends from the gate 58 across the interface 23 to the receiver 72 to enable the receiver to pass the address information carried on the lines BA11 and BA12 when the gate 58 is turned on, but to block said information when said gate 58 is turned off. Assuming that said gate 58 is turned on, the BA11 and BA12 information is carried on buses 84 and 86 to be combined with the information on lines CA1-CA10 to provide lines CA1-CA12 which are applied to the memories 28 and 30 to read out data and tag information, respectively.
  • tag information is carried on the tag address lines BA13-BA23 from the bus driver 66 over a bus 88 across the interface 23 to a receiver 90, which may be of type F244.
  • the tag information on these lines BA13-BA23 is carried from the receiver 90 over buses 92 and 94, respectively, to a gate 96, which may be of type F244, and to a tag comparator 98, which may be of type F521.
  • a "tag replace" input 100 is also applied to the gate 96, for a purpose to be subsequently described.
  • An output bus 102 from the gate 96 extends to and interconnects with a bi-directional bus 104 which carries information on lines BA13-BA23 both to the tag memory 30 and to the tag comparator 98.
  • a "HIT" output line 106 is connected to the tag comparator 98, for a purpose to be subsequently described.
  • the tag information which is applied to the tag comparator 98 from the receiver 90 is compared there to the tag information which has been read out of the tag memory 30 and carried to the comparator 98 by the bus 104. If the comparison is correct, a "HIT" signal appears on line 106, which generates a DTACK signal and causes the data read out from the data memory 28 to be utilized under control of the MPU 20. On the other hand, if the comparison of the tag information on lines BA13-BA23 does not correspond to the tag information read out of the tag memory 30, a contrary signal appears on the line 106.
  • the cache memory 26 is thus used to provide data when the proper correspondence of the tag information is present, and is updated with new data from main memory when such correspondence of tag information does not exist.
  • the address A11-A23 provided by the MPU 20 on the bus 54 is a logical address, rather than a physical address, and the gate 58 is turned off, or closed, blocking the bypass path previously described through said gate and the bus 60.
  • the bus 54 carries the logical address A11-A23 to the MMU 52, where the address is translated into a physical address and applied to the latch 64 via a bus 108.
  • the address bits 11 and 12 are applied to the flip flop 76 and stored therein, so that the previous address bits are retained when the next translating operation commences.
  • the bits stored in the flip flop 76 will be spurious.
  • the flip flop 76 is accessed and the bits 11 and 12 stored therein are assumed to be the correct address bits for the current address and are carried over bus 86 to combine with address bits CA1-CA10 to form the total address for reading out the data memory 28 and the tag memory 30.
  • This enables the tag information to be read out sooner than would be the case if the system waited for the MMU 52 to complete the translation of address bits A11 and A12.
  • the tag information TA13-TA23 which has been read out can thus be transmitted immediately over the bus 104 to the comparator 98 for comparison with the translated tag information entering the comparator 98 via the bus 92. If the comparison is not correct, an appropriate signal is provided, and the cache memory 26 is not used, the data being taken from the main memory 19 for this cycle. In addition, the tag information in the cache memory is replaced, as previously described.
  • Since address bits A11 and A12 represent the most significant bits of the index address, the likelihood is high that they will remain the same from one cycle to the next, and considerable time in an operating cycle can be saved by operating on the assumption, or prediction, that these bits will not change, and proceeding immediately with read-out of the cache memory 26, without waiting for the relatively slow MMU 52 to complete the translation of these two bits from logical to physical address.
  • Shown in Fig. 2 is a block representation of the cache control circuitry 110 and an associated mode control circuit 136 utilized for controlling certain of the elements represented by blocks in Figs. 1A and 1B, together with representations of certain of the signal lines appearing in Figs. 1A and 1B, to which corresponding reference characters have been assigned in Fig. 2.
  • Inputs to the cache memory control circuitry 110 are shown at the top (representing signals which are associated with the prediction circuitry) and on the left side (representing signals which are related to cache control) of the block 110, while outputs from the cache control memory circuitry are shown at the bottom (representing signals associated with the flip flop 76 and the receiver 72) and on the right side (representing signals which are related to cache control) of the block 110.
  • the MMU mode control circuitry 136 controls the mode of operation of the MMU 52.
  • the MMU 52 is capable of operating in a first mode in which it is always active; in a second mode in which it is never active; and in a third mode in which it is active under control of the MPU 20 when a logical address is output from the MPU 20 to the MMU 52 for translation.
  • the circuit 136 provides an MMU enable (MMUENBL/) signal on line 111 to the cache control circuit 110 and to the flip flop 76.
  • the MMUENBL/ signal applied to the circuit 110 is essentially inverted and appears on output line 82, by which it is input to receiver 72 as signal NONMMU or NMMU.
  • Shown in Fig. 15 are representations of the relative commencement and duration of a number of significant events which take place during a cycle of operation of the system.
  • Waveform A represents the time of operation of the MPU 20 in generating the physical address signals LA1 to LA7, and the signals A8 to A23.
  • Signals A8 to A10 are operated by the system to become physical address signals and are transmitted with signals LA1 to LA7 on bus 42.
  • all of the lines LA1 to LA10 can be grouped as the lower 10 bits of a physical address.
  • Waveform B represents the active time of the index address signals CA1 to CA10 from the receiver 46 and the signals CA11 and CA12 from the flip flop 76.
  • Waveform C represents the time duration of the output of tag memory 30 in providing tag information over the bus 104 to the tag comparator 98.
  • Waveform D represents the time duration of the output of the MMU 52 in providing the physical address signals PAD11 to PAD23.
  • Waveform E shows the signal commencement of the tag comparison performed in comparator 98 to provide a HIT signal on line 106, and also represents the time duration of the tag comparison performed in comparator 80 of the address bits 36 and 38 to provide a "CORRECT GUESS" output on line 81.
  • Waveform F represents the commencement and duration of the "HIT" signal on line 106.
  • Waveform G represents the commencement and the duration of the "CORRECT GUESS" signal on line 81.
  • Waveform H represents the cache DTACK signal provided by the cache memory 26 to the MPU 20 to control the operation of the MPU 20 to cause it to sample the data provided from the memory and subsequently terminate the cycle of operation.
  • Waveform J represents a clock signal on line 112 from the control circuitry which is shaped by a DTACK signal to operate the flip flop 76 to store the predictor bits out of the current address.
  • The block representations of Figs. 1A, 1B and 2, which were presented in general form to facilitate understanding of the invention, are shown in greater detail in Figs. 5A, 5B, 5C, 5D, 6A, 6B, 7A, 7B, 8A, 8B, 9A, 9B, 10A, 10B, 10C, 10D, 11A, 11B, 12A, 12B, 13A, 13B and 14.
  • reference characters similar to those applied to blocks in Figs. 1A, 1B and 2 have been applied to representations of individual semi-conductor elements in order to clarify the relationship between the block diagrams and the detailed circuit diagrams.
  • the semiconductor elements are identified in the drawings as to type.
  • the various conductors are labeled so that interconnections among the various elements can be seen.
  • the elements 30A, 30B, 30C and 30D shown therein and associated circuitry comprise the tag memory 30.
  • the elements 96A and 96B shown therein and associated circuitry comprise the gate 96.
  • the elements 98A and 98B shown therein and associated circuitry comprise the comparator 98.
  • the elements 110A-110M shown therein, together with the associated circuitry shown therein comprise the cache memory control circuitry 110 shown in Fig. 2.
  • the programmable logic arrays 110E and 110M are programmed in the illustrated embodiment in accordance with the following equations:
  • CCAS = CLDS # CUDS ;
  • TRMWE = RAMWE * /FREEZE * INRANGE * /LATCOMP * CRW ;
  • LDRMWE = RAMWE * /FREEZE * INRANGE * /LATCOMP * CRW ;
  • UDRMWE = RAMWE * /FREEZE * INRANGE * /LATCOMP * CRW ;
  • MUCLK = /(MMUENBL * DTACK2 * INRANGE * CRW) ;
  • NMMU = /MMUENBL ;
  • the elements 28A-28E shown therein, together with the associated circuitry shown therein comprise the data memory section 28 of the cache memory 26 of Fig. 1B.
  • the element 46A and a portion of the element 120 including lines BA8, BA9 and BA10, together with associated circuitry represent the receiver 46.
  • the element 90A, together with the remainder of the element 120, together with associated circuitry, represents the receiver 90.
  • the element 76A, together with associated circuitry, represents the flip flop 76.
  • the element 72A, together with associated circuitry, represents the receiver 72.
  • the elements 80A and 80B, together with associated circuitry, represent the comparator 80.
  • the element 20A together with associated circuitry represents the microprocessor unit 20.
  • the element 52A, together with associated circuitry represents the memory management unit 52.
  • the element 44A and a portion of the element 122 including lines LA9 and LA10, together with associated circuitry represent the buffer 44.
  • the element 66A, together with the remainder of the element 122, together with associated circuitry, represents the bus driver 66, except that the portion of element 66A associated with pin 17 represents a portion of the system control signals.
  • the element 124 represents additional general system control circuitry, falling within the category of miscellaneous bus signals, generally represented by line 130 in Fig. 2. Since these signals do not relate specifically to the present invention, they are not described in further detail.
  • the elements 58A and 58B, together with associated circuitry represent the gate 58.
  • the element 64 and a portion of element 126, together with associated circuitry represent the latch 64.
  • the remainder of the element 126, comprising the portion associated with pins 3, 4 and 7 is associated with the MPU 20 and is used to perform a bus transfer function in connection with lines A8, A9 and A10, due to the particular requirements of the illustrated embodiment of the invention.
  • in Figs. 13A and 13B, the elements 58A and 58B, together with associated circuitry, represent the gate 58.
  • the elements 136A-136F together with associated circuitry, represent the MMU mode control circuit 136.
  • inputs A and B are mode select lines controlled by the system software;
  • FC2 is a status line from the MPU 20;
  • BGACK is a line carrying a signal indicating when a direct memory access from an alternate bus master is present, which shuts down the MMU 52 and cache memory 26 reading.
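The programmable-array equations listed above for elements 110E and 110M use PAL notation (* for AND, # for OR, / for NOT). As an illustration only, they can be re-expressed as Python boolean functions; the signal names follow the listing, and modelling the arrays as pure combinational functions is an assumption:

```python
# PAL equations for arrays 110E and 110M, re-expressed as boolean
# functions (PAL notation: * = AND, # = OR, / = NOT).

def ccas(clds, cuds):
    # CCAS = CLDS # CUDS
    return clds or cuds

def trmwe(ramwe, freeze, inrange, latcomp, crw):
    # TRMWE = RAMWE * /FREEZE * INRANGE * /LATCOMP * CRW
    # (LDRMWE and UDRMWE share the same product term)
    return ramwe and not freeze and inrange and not latcomp and crw

def muclk(mmuenbl, dtack2, inrange, crw):
    # MUCLK = /(MMUENBL * DTACK2 * INRANGE * CRW)
    return not (mmuenbl and dtack2 and inrange and crw)

def nmmu(mmuenbl):
    # NMMU = /MMUENBL
    return not mmuenbl
```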

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

A memory addressing system utilizing translated memory address information includes a microprocessor (20), a memory management unit (52) adapted to translate addresses, a cache memory (26) which stores data information and tag information, and a main memory (19). The cache memory (26) is addressed by a 23-bit word which includes a 12-bit index address portion and an 11-bit tag portion. The two most significant bits of the index address portion are stored in a flip-flop device (76). During the next memory cycle the cache memory (26) is addressed using the untranslated least significant 10 bits of the index address together with the two stored bits to provide rapid addressing. The two stored bits are compared in a comparator (80) with the corresponding two new bits after translation, and the cache memory (26) output is utilized for a further comparison if the comparison result is equal. Otherwise, the main memory (19) output is utilized as the system output. The further comparison in a comparator (98) between the new tag information and the tag information read out from the cache memory (26) further determines whether the cache memory (26) or the main memory (19) is utilized for the system output.

Description

MEMORY ADDRESSING SYSTEM
Technical Field
This invention relates to memory addressing systems of the kind including data processing means adapted to control memory access operations; cache memory means adapted to provide relatively fast access to data; and second memory means adapted to provide relatively slow, large-scale storage.
This invention also relates to a method for utilization of translated memory address information in a memory system having a fast cache memory and a slower second memory.
Background Art
U.S. Patent No. 4,386,402 discloses a memory addressing system including a processor, main memory means, buffer memory means and address translation means, in which the buffer memory includes a cache memory portion and an interrupt stack portion.
The known memory addressing system has the disadvantage of limited speed of operation resulting from the time taken to effect address translation.
Disclosure of the Invention
It is the object of the present invention to provide a memory addressing system and method wherein the disadvantageous effect of the slow speed of operation of an address translation unit on system operating speed is minimized.
Therefore, according to one aspect of the present invention, there is provided a memory addressing system of the kind specified, characterized by: translating means coupled to said data processing means and adapted to translate information including memory address information received from said data processing means into a different form for accessing said cache memory means and said second memory means; storage means adapted to store first memory address information translated by said translating means during a first cycle of operation of the system; means for accessing the first memory address information from said storage means and utilizing it during a second cycle of operating during which second memory address information is translated by said translating means; and comparison means adapted to compare said first and second memory address information, whereby said cache memory means is enabled to be accessed using said first memory address information if said first and second memory address information are found to be identical by said comparison, and said second memory means is caused to be accessed using said second memory address information if said first and second memory address information are found not to be identical by said comparison.
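The claimed arrangement can be sketched as a small simulation. This is an illustrative model only, not the patented circuit: the class name, the two-bit extraction, and the `translate` callable are hypothetical stand-ins for the translating means, storage means, and comparison means.

```python
# Illustrative simulation of the claimed scheme (names are hypothetical).
# Bits stored from the previous cycle's translation let the cache be
# addressed speculatively while the current translation completes.

class PredictiveAddresser:
    def __init__(self, translate):
        self.translate = translate   # slow logical-to-physical translation
        self.stored_bits = None      # first memory address info (prior cycle)

    def access(self, logical_addr):
        predicted = self.stored_bits                # prediction from storage means
        physical = self.translate(logical_addr)     # completes later in the cycle
        current = (physical >> 10) & 0b11           # bits 11-12 of the index
        hit_path = (predicted == current)           # comparison means
        self.stored_bits = current                  # store for the next cycle
        return "cache" if hit_path else "main memory"
```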
According to another aspect of the present invention, there is provided a method of the kind specified, characterized by the following steps: storing first memory address information during a first cycle of operation, translating information including second memory address information during a second cycle of operation; commencing a memory access operation of the cache memory during the second cycle of operation using said first memory address information while said second memory address information is being translated; comparing said first and second memory address information when the translation of said second memory address information has been completed; enabling an access operation of the cache memory if the first and second memory address information are identical; and performing an access operation of the second memory using said second memory address information if the first and second memory address information are not identical.
Brief Description of the Drawings
One embodiment of the present invention will now be described by way of example, with reference to the accompanying drawings, in which:
Figs. 1A and 1B, taken together, constitute a block diagram of a system embodying the present invention.
Fig. 2 is a block diagram of cache memory control circuitry.
Fig. 3 is a diagram showing the format of an address for addressing the cache memory.
Fig. 4 is a map showing the manner in which Figs. 5A, 5B, 5C and 5D are combined to provide a detailed diagram of certain of the system circuitry.
Figs. 5A, 5B, 5C and 5D together constitute a detailed diagram of certain of the system circuitry.
Figs. 6A and 6B; 7A and 7B; 8A and 8B; 9A and 9B; 10A, 10B, 10C and 10D; 11A and 11B; 12A and 12B; 13A and 13B and 14 constitute additional detailed diagrams of certain portions of the system circuitry.
Fig. 15 is a diagram showing waveforms of certain signals utilized in the system.
Fig. 16 is a map showing the manner in which Figs. 10A, 10B, 10C and 10D are combined to provide a detailed diagram of certain of the system circuitry.
Best Mode for Carrying Out the Invention
Figs. 1A and 1B, taken together, constitute a block diagram of a data processing system embodying the present invention. A microprocessor, or MPU 20, which may be of type 68010, manufactured by Motorola, Inc., is the main processor for the system in which the present invention is embodied, and controls terminals and other devices of the system. It outputs addresses to memory and receives instructions and data in response which enable it to exert the necessary controls. In the illustrated embodiment the microprocessor 20 is physically located on a circuit board, which is electrically coupled, by an interface or back plane represented by line 23, to a second circuit board 22, which includes components such as a cache memory 26 which comprises a data memory 28 and a tag, or identification, memory 30. Each data location in memory 28 is associated with a corresponding tag location in memory 30, which provides an identification or label for the associated data. The same address is used for accessing both the data memory 28 and the tag memory 30, and the tag information which is accessed is compared to tag information provided by the processor 20 to assure that the data from the memory 28 is actually the data sought. If the comparison is correct, the corresponding data is then used by the system for the intended purpose. If the comparison is not correct, then the newly provided tag information is substituted in the tag memory 30 for the previous tag information, and at the same time, new data corresponding to the newly provided tag information is read from the main memory 19 and provided over bus 29 to the data memory 28, into which it is written, to be available for possible future rapid access. It should be noted that in every cache memory access operation, the main memory 19 is also accessed in parallel, so that if the desired data is not in the cache memory 26, it can be obtained from the main memory 19 with minimum delay.
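The hit/miss behaviour described in this paragraph can be modelled as a minimal direct-mapped cache. All names here are hypothetical, and the model fetches from main memory on demand, whereas the real system accesses main memory in parallel with every cache access:

```python
# Minimal direct-mapped cache model: a tag match uses the cached data;
# a mismatch replaces the tag and refills the data line from main memory.

class Cache:
    def __init__(self, lines=4096):          # 4K locations, as in memories 28/30
        self.tags = [None] * lines           # tag memory 30
        self.data = [None] * lines           # data memory 28

    def read(self, index, tag, main_memory):
        if self.tags[index] == tag:          # tag comparison correct: HIT
            return self.data[index], True
        # Miss: substitute the new tag and write the corresponding data
        self.tags[index] = tag
        self.data[index] = main_memory[(tag << 12) | index]
        return self.data[index], False
```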
The memories 28 and 30 may be static random access memories, 4K by 16 in capacity. As is well-known, a cache memory is a relatively small memory in which may be stored data which is frequently used or otherwise needed, in order to provide more rapid access than can generally be obtained from a main memory, such as the memory 19, shown in block form in Fig. 1B, which is coupled to the rest of the circuitry at interface 23. However, cache memories are more expensive than main memories per storage location, and are therefore generally much smaller, so that their use is limited to essential data, which may be changed from time to time as needed during an operation. Frequently, information which has already been used in an operation is selectively placed in cache memory, since this same information may be needed again during the operation, and can be accessed more quickly from the cache memory than from the main memory. A bidirectional input-output bus 29 carrying data lines CD0-CD15 is coupled to the data memory 28.
Accessing of the cache memory 26 by the MPU 20 is accomplished by means of a 23-bit address, the format 32 of which is shown in Fig. 3, with bits 1-23 extending from the right to left in the figure. The format 32 includes a 12-bit index or address portion 34 which includes bits 11 and 12, comprising the most significant bits of the index portion, designated as prediction bits 36 and 38, and also includes an 11-bit tag portion 40, which is used for checking or identification purposes. The manner in which the address and tag portions are employed will be subsequently described.
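The partitioning of the 23-bit address of format 32 can be expressed as a simple bit-field extraction; the following Python sketch (the function name `split_address` is illustrative) assumes bit 1 is the least significant bit, as in Fig. 3:

```python
# Splitting a 23-bit address (bits numbered 1..23 as in Fig. 3) into
# the 12-bit index (bits 1-12), the two prediction bits (bits 11 and
# 12, the most significant bits of the index), and the 11-bit tag
# (bits 13-23) used for checking or identification.

def split_address(addr):
    index = addr & 0xFFF                # bits 1-12: index portion 34
    prediction = (addr >> 10) & 0x3     # bits 11 and 12: bits 36, 38
    tag = (addr >> 12) & 0x7FF          # bits 13-23: tag portion 40
    return index, prediction, tag
```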
Address bits 1-10 inclusive are transmitted from MPU 20 over lines LA1-LA10 in a bus 42 to a buffer 44, which may be of type F244, and which enables the signals on the lines, now designated BA1-BA10, to cross the back plane 23, via a bus 45, to a bus receiver 46, which also may be of type F244, on board 22.
The address lines extend from the receiver 46, and are now designated CA1-CA10, in a bus 48. These lines are joined by lines CA11 and CA12 in a bus 50, and are applied to both the data memory 28 and the tag memory 30 of the cache memory 26. The generation of the address bits on lines CA11 and CA12 will subsequently be described in detail. Due to the particular parameters and characteristics of the system, the address bits CA11 and CA12 cannot be included with the address bits CA1-CA10, but must be grouped for processing with the tag bits A13-A23. It will be noted that since the address bits CA11 and CA12 are the most significant bits of the index address, they are the bits least likely to change from one operation to the next, and therefore the bits whose content is easiest to predict on the basis of probability.
The portion A11-A23 of each cache memory address and tag word is generated by the MPU 20 in either logic address form, if a memory management unit (MMU) 52 on the board 22 is to be used, or in physical address form, if the MMU 52 is to be bypassed, as can be done under control of the MPU 20. The MMU 52 is a device which is extremely slow in operation compared to the operating speed of the remainder of the system, and which is used for the translation of logical addresses from the MPU 20 into physical addresses for accessing the cache memory 26 or the main memory 19. It is employed in connection with user programs, as distinguished from supervisory programs, in which the physical addresses in the system have not been established, and only relative, or logical, addresses have been provided. Use of logical addresses gives programmers greater latitude, and enables the use of a smaller main memory than would otherwise be required.
Let it first be assumed that the MMU 52 is to be bypassed and not used. In such case, the address lines A11-A23, which include the tag portion 40 and prediction bits 36, 38 are in physical, as distinguished from logical, form and are carried in buses 54 and 56 to a gate 58, which may be of type F244. The gate 58 is "turned on" only if the MMU 52 is not used, and said gate and bus 56 are used only in such a situation. The gate 58 then passes the lines A11-A23 to a bus 60 which is coupled to a bus 62 extending between a latch 64, which may be of type F373, and a bus driver 66, which may be of type F244. An MMU mode control signal MMUENBL/ is applied to the latch 64 on line 111.
From the bus driver 66, a first bus 68 carries address lines BA11 and BA12 across the interface 23 to a second bus 70 which carries said lines to a receiver 72, which may be of type F244. A branch 74 from the bus 70 carries the lines BA11 and BA12 to a flip-flop 76, which may be of type LS374. An MMU mode control signal MMUENBL/ is applied to the flip-flop 76 on line 111. A second branch 78 from the bus 70 carries the lines BA11 and BA12 to a comparator 80 which may be of type 74S86. A "correct guess" output 81 extends from the comparator 80, for a purpose to be subsequently described. A control 82 extends from the gate 58 across the interface 23 to the receiver 72 to enable the receiver to pass the address information carried on the lines BA11 and BA12 when the gate 58 is turned on, but to block said information when said gate 58 is turned off. Assuming that said gate 58 is turned on, the BA11 and BA12 information is carried on buses 84 and 86 to be combined with the information on lines CA1-CA10 to provide lines CA1-CA12 which are applied to the memories 28 and 30 to read out data and tag information, respectively.
Returning to bus driver 66, and assuming that gate 58 is turned on to pass a physical address to the bus driver 66 over buses 60 and 62, tag information is carried on the tag address lines BA13-BA23 from the bus driver 66 over a bus 88 across the interface 23 to a receiver 90, which may be of type F244. The tag information on these lines BA13-BA23 is carried from the receiver 90 over buses 92 and 94, respectively, to a gate 96, which may be of type F244, and to a tag comparator 98, which may be of type F521. A "tag replace" input 100 is also applied to the gate 96, for a purpose to be subsequently described. An output bus 102 from the gate 96 extends to and interconnects with a bi-directional bus 104 which carries information on lines BA13-BA23 both to the tag memory 30 and to the tag comparator 98. A "HIT" output line 106 is connected to the tag comparator 98, for a purpose to be subsequently described.
The tag information which is applied to the tag comparator 98 from the receiver 90 is compared there to the tag information which has been read out of the tag memory 30 and carried to the comparator 98 by the bus 104. If the comparison is correct, a "HIT" signal appears on line 106, which generates a DTACK signal and causes the data read out from the data memory 28 to be utilized under control of the MPU 20. On the other hand, if the comparison of the tag information on lines BA13-BA23 does not correspond to the tag information read out of the tag memory 30, a contrary signal appears on the line 106. This causes generation of a "tag replace" signal on line 100 associated with gate 96, which in turn causes the gate to provide the tag information on lines BA13-BA23 over buses 102 and 104 to the tag memory 30, where it is stored in place of the tag information most recently read out. At the same time, data from the main memory corresponding to the tag information on lines BA13-BA23 is applied from the main memory 19 over bus 29 to the cache data memory 28 and is written therein under control of the UPDATE signal on line 134 in place of the data formerly stored in the address represented in lines CA1-CA12.
The cache memory 26 is thus used to provide data when the proper correspondence of the tag information is present, and is updated with new data from main memory when such correspondence of tag information does not exist.
It will be seen that the readout of data and the comparison of tag information cannot take place until a physical address for bits CA11 and CA12 (bits 36 and 38 of Fig. 3) is provided.
It will further be seen that in those operations in which MMU 52 is used, the relatively slow operation of the MMU 52 in translating the logical addresses of these address bits CA11 and CA12 to physical addresses would severely handicap the system from a speed of operation standpoint. It is therefore desirable to provide an alternative means of obtaining these addresses.
Now let it be assumed that the MMU 52 is to be used during an operation of the system embodying the present invention. In such case, the address A11-A23 provided by the MPU 20 on the bus 54 is a logical address, rather than a physical address, and the gate 58 is turned off, or closed, blocking the bypass path previously described through said gate and the bus 60. The bus 54 carries the logical address A11-A23 to the MMU 52, where the address is translated into a physical address and applied to the latch 64 via a bus 108.
During each operation of the MMU 52, the address bits 11 and 12, after translation into physical address form, are applied to the flip flop 76 and stored therein, so that the previous address bits are retained when the next translating operation commences. During the first translation after startup, the bits stored in the flip flop 76 will be spurious.
During the time that the MMU 52 is engaged in translation, the flip flop 76 is accessed and the bits 11 and 12 stored therein are assumed to be the correct address bits for the current address and are carried over bus 86 to combine with address bits CA1-CA10 to form the total address for reading out the data memory 28 and the tag memory 30. This enables the tag information to be read out sooner than would be the case if the system waited for the MMU 52 to complete the translation of address bits A11 and A12. The tag information TA13-TA23 which has been read out can thus be transmitted immediately over the bus 104 to the comparator 98 for comparison with the translated tag information entering the comparator 98 via the bus 92. If the comparison is not correct, an appropriate signal is provided, and the cache memory 26 is not used, the data being taken from the main memory 19 for this cycle. In addition, the tag information in the cache memory is replaced, as previously described.
Concurrently, a determination is made as to whether the address bits A11 and A12 of the previous cycle, which were stored in the flip flop 76 and used as part of the current address, are in fact the same as the corresponding bits in the current cycle. This determination is made by the comparator 80 which receives the prior cycle address from the flip flop 76 via the bus 86, and which receives the translated current address from the driver 66 via the buses 68 and 78. If the comparison is correct, a "correct guess" signal is provided on the line 81 and the readout and use of the data from the cache data memory 28 proceeds if a "HIT" signal on line 106 is also present. If the comparison is not correct, an appropriate signal is provided, and the cache memory 26 is not used, the data being taken from the main memory 19 for this cycle.
Since the address bits A11 and A12 represent the most significant bits of the index address, the likelihood is high that they will remain the same from one cycle to the next, and considerable time in an operating cycle can be saved by operating on the assumption, or prediction, that these bits will not change, and proceeding immediately with read-out of the cache memory 26, without waiting for the relatively slow MMU 52 to complete the translation of these two bits from logical to physical address.
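The prediction scheme may be summarized in a hypothetical sketch (all names illustrative): the cache read is begun at once with the saved bits while translation proceeds, and the result is honored only if the comparison corresponding to comparator 80 confirms the guess.

```python
# Sketch of the prediction scheme: while the (slow) MMU translates the
# current logical address, the cache is read using the physical bits 11
# and 12 saved from the previous cycle (flip-flop 76).  When translation
# completes, the guess is checked (comparator 80); a wrong guess means
# the cache result is discarded and main memory is used for this cycle.

def predicted_access(saved_bits, low_index_bits, translate, logical_addr, cache_read):
    # start the cache read immediately with the guessed high index bits
    speculative = cache_read((saved_bits << 10) | low_index_bits)
    physical = translate(logical_addr)           # slow MMU translation
    actual_bits = (physical >> 10) & 0x3         # translated bits 11 and 12
    correct_guess = (actual_bits == saved_bits)  # "CORRECT GUESS" on line 81
    return (speculative if correct_guess else None), actual_bits
```

The returned `actual_bits` would then be stored for the next cycle, just as the flip flop 76 is reclocked with the current translated bits.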
Shown in Fig. 2 is a block representation of the cache control circuitry 110 and an associated mode control circuit 136 utilized for controlling certain of the elements represented by blocks in Figs. 1A and 1B, together with representations of certain of the signal lines appearing in Figs. 1A and 1B, to which corresponding reference characters have been assigned in Fig. 2. Inputs to the cache memory control circuitry 110 are shown at the top (representing signals which are associated with the prediction circuitry) and on the left side (representing signals which are related to cache control) of the block 110, while outputs from the cache control memory circuitry are shown at the bottom (representing signals associated with the flip flop 76 and the receiver 72) and on the right side (representing signals which are related to cache control) of the block 110.
The MMU mode control circuitry 136 controls the mode of operation of the MMU 52. The MMU 52 is capable of operating in a first mode in which it is always active; in a second mode in which it is never active; and in a third mode in which it is active under control of the MPU 20 when a logical address is output from the MPU 20 to the MMU 52 for translation. The circuit 136 provides an MMU enable (MMUENBL/) signal on line 111 to the cache control circuit 110 and to the flip flop 76. The MMUENBL/ signal applied to the circuit 110 is essentially inverted and appears on output line 82, by which it is input to receiver 72 as signal NONMMU or NMMU.

Shown in Fig. 15 are representations of the relative commencement and duration of a number of significant events which take place during a cycle of operation of the system.
Waveform A represents the time of operation of the MPU 20 in generating the physical address signals LA1 to LA7, and the signals A8 to A23. Signals A8 to A10 are operated upon by the system to become physical address signals and are transmitted with signals LA1 to LA7 on bus 42. For purposes of general discussion, all of the lines LA1 to LA10 can be grouped as the lower 10 bits of a physical address.
Waveform B represents the active time of the index address signals CA1 to CA10 from the receiver 46 and the signals CA11 and CA12 from the flip flop 76.
Waveform C represents the time duration of the output of tag memory 30 in providing tag information over the bus 104 to the tag comparator 98.
Waveform D represents the time duration of the output of the MMU 52 in providing the physical address signals PAD11 to PAD23.
Waveform E shows the signal commencement of the tag comparison performed in comparator 98 to provide a HIT signal on line 106, and also represents the time duration of the tag comparison performed in comparator 80 of the address bits 36 and 38 to provide a "CORRECT GUESS" output on line 81.
Waveform F represents the commencement and duration of the "HIT" signal on line 106.
Waveform G represents the commencement and the duration of the "CORRECT GUESS" signal on line 81.
Waveform H represents the cache DTACK signal provided by the cache memory 26 to the MPU 20 to control the operation of the MPU 20 to cause it to sample the data provided from the memory and subsequently terminate the cycle of operation.

Waveform J represents a clock signal on line 112 from the control circuitry which is shaped by a DTACK signal to operate the flip flop 76 to store the predictor bits out of the current address.
The block representations of Figs. 1A, 1B and 2, which were presented in general form to facilitate understanding of the invention, are shown in greater detail in Figs. 5A, 5B, 5C, 5D, 6A, 6B, 7A, 7B, 8A, 8B, 9A, 9B, 10A, 10B, 10C, 10D, 11A, 11B, 12A, 12B, 13A, 13B and 14. In these figures, reference characters similar to those applied to blocks in Figs. 1A, 1B and 2 have been applied to representations of individual semi-conductor elements in order to clarify the relationship between the block diagrams and the detailed circuit diagrams. The semiconductor elements are identified in the drawings as to type. In addition, the various conductors are labeled so that interconnections among the various elements can be seen. In Figs. 5A, 5B, 5C and 5D, the elements 30A, 30B, 30C and 30D shown therein and associated circuitry comprise the tag memory 30. The elements 96A and 96B shown therein and associated circuitry comprise the gate 96. The elements 98A and 98B shown therein and associated circuitry comprise the comparator 98. In Figs. 6A, 6B, 7A and 7B, the elements 110A-110M shown therein, together with the associated circuitry shown therein, comprise the cache memory control circuitry 110 shown in Fig. 2. The programmable logic arrays 110E and 110M are programmed in the illustrated embodiment in accordance with the following equations:
EQUATIONS FOR PROGRAMMABLE LOGIC ARRAY 110E (U69):
/* - - INPUTS - - */

PIN 1 = /BGACK ; PIN 2 = CRW ; PIN 3 = /BDEN ; PIN 4 = RAMWE ; PIN 5 = /CUDS ; PIN 6 = /CLDS ; PIN 7 = /LATCOMP ; PIN 8 = /PAM ; PIN 9 = FREEZE ; PIN 11 = /CACHEN ; PIN 13 = /INRANGE ;

/* - - OUTPUTS - - */

PIN 12 = /XDIR ; PIN 14 = /UXEN ; PIN 15 = /LXEN ; PIN 16 = /UDRMWE ; PIN 17 = /LDRMWE ; PIN 18 = /TRMWE ; PIN 19 = /TABEN ;

/* - - DECLARATIONS AND INTERMEDIATE VARIABLES - - */

CCAS = CLDS # CUDS ;

/* - - LOGIC EQUATIONS - - */
TRMWE = RAMWE * /FREEZE * INRANGE * /LATCOMP * CRW * /BGACK * PAM * CCAS ;

LDRMWE = RAMWE * /FREEZE * INRANGE * /LATCOMP * CRW * /BGACK * PAM * CCAS
       + RAMWE * INRANGE * LATCOMP * /CRW * CLDS * /FREEZE * PAM
       + RAMWE * INRANGE * LATCOMP * /CRW * CLDS * CACHEN * PAM ;

UDRMWE = RAMWE * /FREEZE * INRANGE * /LATCOMP * CRW * /BGACK * PAM * CCAS
       + RAMWE * INRANGE * LATCOMP * /CRW * CUDS * /FREEZE * PAM
       + RAMWE * INRANGE * LATCOMP * /CRW * CUDS * CACHEN * PAM ;

TABEN = BDEN * RAMWE * /FREEZE * INRANGE * /LATCOMP * CRW * /BGACK * PAM * CCAS ;

LXEN = BDEN * RAMWE * /FREEZE * INRANGE * /LATCOMP * CRW * /BGACK * PAM * CCAS
     + BDEN * RAMWE * INRANGE * LATCOMP * /CRW * CLDS * /FREEZE * PAM
     + BDEN * RAMWE * INRANGE * LATCOMP * /CRW * CLDS * CACHEN * PAM
     + CLDS * CACHEN * INRANGE * LATCOMP * CRW * PAM * /BGACK ;

UXEN = BDEN * RAMWE * /FREEZE * INRANGE * /LATCOMP * CRW * /BGACK * PAM * CCAS
     + BDEN * RAMWE * INRANGE * LATCOMP * /CRW * CUDS * /FREEZE * PAM
     + BDEN * RAMWE * INRANGE * LATCOMP * /CRW * CUDS * CACHEN * PAM
     + CUDS * CACHEN * INRANGE * LATCOMP * CRW * PAM * /BGACK ;

XDIR = LDRMWE
     + UDRMWE ;
EQUATIONS FOR PROGRAMMABLE LOGIC ARRAY 110M (U65):
/* - - INPUTS - - */

PIN 1 = CA22 ; PIN 2 = CA23 ; PIN 3 = /LCOMP ; PIN 4 = /UCOMP ; PIN 5 = /CAS ; PIN 6 = /GOTH ; PIN 7 = /DEL1CAS ; PIN 8 = /MMUENBL ; PIN 9 = CRW ; PIN 11 = /CACHEN ; PIN 14 = FREEZE ; PIN 15 = /GOT12 ; PIN 16 = /BGACK ;

/* - - OUTPUTS - - */

PIN 12 = MUCLK ; PIN 13 = /INRANGE ; PIN 17 = /DTACK2 ; PIN 18 = /DTACK ; PIN 19 = /NMMU ;

/* - - DECLARATIONS AND INTERMEDIATE VARIABLES - - */
DELAY = DEL1CAS ;
/* - - LOGIC EQUATIONS - - */
DTACK.OE = INRANGE * LCOMP * UCOMP * CRW * CACHEN * /MMUENBL * BGACK ;
DTACK = DELAY ;
DTACK2.OE = INRANGE * LCOMP * UCOMP * CRW * CACHEN * MMUENBL * GOTH * GOT12 * /BGACK ;
DTACK2 = CAS
       + DELAY ;

INRANGE = CA22 * CAS
        + CA23 * CAS ;
MUCLK = /(MMUENBL * DTACK2 * INRANGE * CRW
       + MMUENBL * DELAY * INRANGE * /CRW) ;
NMMU = /MMUENBL ;
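As a check on the sum-of-products notation, a term such as INRANGE can be modeled directly in Boolean form; the following Python sketch is illustrative only (function names are not part of the disclosure):

```python
# Hypothetical Boolean model of two of the PLA outputs above, useful for
# checking the sum-of-products equations: INRANGE asserts when CAS is
# active and either CA22 or CA23 is set; NMMU is the inverse of MMUENBL.

def inrange(ca22, ca23, cas):
    return (ca22 and cas) or (ca23 and cas)

def nmmu(mmuenbl):
    return not mmuenbl
```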
In Figs. 8A and 8B, the elements 28A-28E shown therein, together with the associated circuitry shown therein, comprise the data memory section 28 of the cache memory 26 of Fig. 1B. In Figs. 9A and 9B, the element 46A and a portion of the element 120 including lines BA8, BA9 and BA10, together with associated circuitry, represent the receiver 46. The element 90A, together with the remainder of the element 120, together with associated circuitry, represents the receiver 90. The element 76A, together with associated circuitry, represents the flip flop 76. The element 72A, together with associated circuitry, represents the receiver 72. The elements 80A and 80B, together with associated circuitry, represent the comparator 80. In Figs. 10A, 10B, 10C and 10D, the element 20A together with associated circuitry, represents the microprocessor unit 20. In Figs. 11A, 11B, 11C and 11D, the element 52A, together with associated circuitry, represents the memory management unit 52. In Figs. 12A, 12B, 12C and 12D, the element 44A and a portion of the element 122 including lines LA9 and LA10, together with associated circuitry, represent the buffer 44. The element 66A, together with the remainder of the element 122, together with associated circuitry, represents the bus driver 66, except that the portion of element 66A associated with pin 17 represents a portion of the system control signals. In addition, the element 124, together with associated circuitry, represents additional general system control circuitry, falling within the category of miscellaneous bus signals, generally represented by line 130 in Fig. 2. Since these signals do not relate specifically to the present invention, they are not described in further detail. In Figs. 13A and 13B, the elements 58A and 58B, together with associated circuitry, represent the gate 58. The element 64 and a portion of element 126, together with associated circuitry, represent the latch 64. 
The remainder of the element 126, comprising the portion associated with pins 3, 4 and 7 is associated with the MPU 20 and is used to perform a bus transfer function in connection with lines A8, A9 and A10, due to the particular requirements of the illustrated embodiment of the invention. In Fig. 14, the elements 136A-136F, together with associated circuitry, represent the MMU mode control circuit 136. In this circuit, it may be noted that inputs A and B are mode select lines controlled by the system software; FC2 is a status line from the MPU 20; and BGACK is a line carrying a signal indicating when a direct memory access from an alternate bus master is present, which shuts down the MMU 52 and cache memory 26 reading.
Claims

1. A memory addressing system, including data processing means (20) adapted to control memory access operations; cache memory means (26) adapted to provide relatively fast access to data; and second memory means (19) adapted to provide relatively slow, large-scale storage, characterized by: translating means (52) coupled to said data processing means (20) and adapted to translate information including memory address information received from said data processing means (20) into a different form for accessing said cache memory means (26) and said second memory means (19); storage means (76) adapted to store first memory address information translated by said translating means (52) during a first cycle of operation of the system; means for accessing the first memory address information from said storage means (76) and utilizing it during a second cycle of operation during which second memory address information is translated by said translating means (52); and comparison means (80) adapted to compare said first and second memory address information, whereby said cache memory means (26) is enabled to be accessed using said first memory address information if said first and second memory address information are found to be identical by said comparison, and said second memory means (19) is caused to be accessed using said second memory address information if said first and second memory address information are found not to be identical by said comparison.
2. A memory addressing system according to claim 1, characterized in that said cache memory means (26) includes a data memory section (28) and an identification section (30), and in that the information translated by said translating means (52) includes first identification information, corresponding to second identification information stored in said identification section (30), and associated with data stored in said data memory section (28), said system also including second comparison means (98) adapted to compare said first and second identification information for evaluating said associated data.
3. A memory addressing system according to claim 2, characterized by means for causing said second memory means (19) to be accessed if said first and second identification information are found not to be identical by said second comparison means (98).
4. A memory addressing system according to claim 2, characterized by means for causing said first identification information to be written into said identification section (30) of said cache memory means (26) in place of said second identification information.
5. A memory addressing system according to claim 1, characterized in that said data processing means is a microprocessor (20).
6. A memory addressing system according to claim 1, characterized in that said translating means is a memory management unit (52).
7. A memory addressing system according to claim 1, characterized in that said storage means (76) comprises at least one flip-flop.
8. A method for utilization of translated memory address information in a memory system having a fast cache memory (26) and a slower second memory (19), characterized by the following steps: storing first memory address information during a first cycle of operation; translating information including second memory address information during a second cycle of operation; commencing a memory access operation of the cache memory (26) during the second cycle of operation using said first memory address information while said second memory address information is being translated; comparing said first and second memory address information when the translation of said second memory address information has been completed; enabling an access operation of the cache memory (26) if the first and second memory address information are identical; and performing an access operation of the second memory (19) using said second memory address information if the first and second memory address information are not identical.
9. A method according to claim 8, characterized in that the translation of memory address information is from a logical address to a physical address.
10. A method according to claim 8, characterized in that the cache memory (26) includes a data memory section (28) and an identification memory section (30) and in that the translated information also includes identification information, said method also including the step of reading out identification information from the identification memory section (30) utilizing said first memory address; the step of comparing the identification information from the identification memory section (30) with the identification information from the translated memory address; and the step of determining whether the data information from the cache memory (26) may be used, based upon the result of the comparison of identification information.
11. A method according to claim 10, characterized by the step of replacing the identification information in the identification section (30) of the cache memory (26) when the comparison shows a lack of identity of such information.
12. A method according to claim 10, characterized by the step of performing an access operation of said second memory (19) rather than said cache memory (26) if the identification information from the identification memory section (30) is not identical to the identification information from the translated memory address.