
Integrated multilevel storage hierarchy for a data processing system

Info

Publication number
CA1123964A
CA1123964A
Authority
CA
Grant status
Grant
Patent type
Prior art keywords
data
cache
main memory
processing unit
information processing
Prior art date
Legal status
Expired
Application number
CA 335621
Other languages
French (fr)
Inventor
Anthony J. Capozzi
Current Assignee
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date
Filing date
Publication date
Grant date

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02: Addressing or allocation; Relocation
    • G06F 12/08: Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802: Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0893: Caches characterised by their organisation or structure
    • G06F 12/0897: Caches characterised by their organisation or structure with two or more cache hierarchy levels

Abstract

Abstract of the Disclosure

A two-level storage hierarchy for a data processing system is directly addressable by the main processor. The two-level storage includes a high speed, small cache with a relatively slower, much larger main memory. The processor requests data from the cache and, when the requested data is not resident in the cache, it is transferred from the main memory to the cache and then to the processor. The data transfer organization maximizes the system throughput and uses a single integrated control to accomplish data transfers.

Description


INTEGRATED MULTILEVEL STORAGE HIERARCHY
FOR A DATA PROCESSING SYSTEM

Background of the Invention

Field of the Invention

This invention relates generally to data processing systems and more particularly to a data processing system having a multilevel memory including at least a first, small, high speed cache memory and one or more large, relatively slower main memories with an integrated control system therefor.
Description of the Prior Art

Large data processing systems have processors with substantially increased operating speeds, which has resulted in the need for larger, readily accessible memory systems. In order to fully utilize the increased system operating speeds, it is necessary that the memory, or some component thereof, operate at a speed reasonably close to the speed of the processing unit or units. However, it is extremely difficult to reliably and economically access a block of data at random in a large memory space at high operating speeds.

A solution to the problem is to use a two or more level storage hierarchy including a small, fast cache memory store (hereinafter referred to as a cache) and a large, relatively slower main memory or memories. The system processor unit communicates directly, at essentially system speed, with the cache. If data requested by the processor unit is not in the cache, it must be found in the main memories and transferred to the cache, where it necessarily replaces an existing block of data.

In order for a cache based system to be effective, there must be a highly efficient control store system to effect data transfer between the main memories and the cache and to control any data inputs from the system (channels, processing unit, etc.) to the cache or main memories. If the transfer of data from the main memories is not done efficiently, many of the advantages of using a high speed cache will be lost.

Many of the compromises and trade-offs necessary to optimize a system are not readily apparent. For example, U.S. Patent 3,896,419 describes a cache memory system wherein a data request to the cache store is operated in parallel with the request for data from the main memory store. A successful retrieval from the cache store aborts the retrieval from the main memory. This would appear to be a very efficient approach, especially where a large number of data requests result in the need to extract data from the main memory. However, with improved programming techniques which structure systems to require fewer data transfers from the main memory, such an approach can in fact diminish the effective overall operating speed of a system. The reason for this is that, even though a successful retrieval from cache aborts the retrieval from main memory, it takes some additional time for the memory to recycle and be ready to handle the next request. Therefore, if a cache "hit" (data in cache) is immediately followed by a cache "miss" (data not in cache), the system performance can be degraded, since the system must wait for the main memory to return to a ready state (to quiesce) before the data acquisition and transfer can begin.

Another disadvantage of such a system is that the various clocks (channel, main memory, cache and processor) must be in sync, using the same number of pulses and the same clock cycles. This, of course, presents design constraints and may result in some inefficiencies in one or more of the subsystems.

Objects and Summary of the Invention

Accordingly, it is a principal object of the present invention to provide an improved memory store for a data processing system which overcomes the foregoing disadvantages of the prior art.

Another object of the present invention is to provide a multilevel memory store for a data processing system having a single storage control mechanism.

Yet another object of the present invention is to provide a multilevel memory storage system having improved operating speed and increased reliability.

Yet another and more specific object of the present invention is to provide a centralized control scheme for a multilevel memory store that provides data transfer control to/from a processor/channel and the first and second levels of the storage hierarchy.
The foregoing and other objects and advantages are accomplished, according to one aspect of the invention, by utilizing a two-level memory system having a single integrated control to accomplish data transfers within the system. The memory includes a relatively small, high speed cache adapted to work with the processor at processor speeds and a relatively large, but slower, main memory. In operation, the processor requests data from the cache and, if the requested data is in the cache (hit), it is transferred via a data bus to the processor. If the requested data is not in the cache (miss), a transfer process takes place to move the requested data from the main memory to the cache, where it can then be re-requested by the processor. The main memory is activated for a data transfer to cache only after it has been determined that a cache miss has occurred.
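As an informal illustration only (not part of the patent disclosure), the request flow just described can be sketched in the following toy model; the names fetch, cache and main_memory are hypothetical, and the cache here is an unbounded dictionary rather than the fixed 8K byte store described later.

    # Toy sketch of the two-level access pattern: the processor always asks the
    # cache first, and main memory is used only after a miss has been determined.
    CACHE_PAGE_BYTES = 64

    cache = {}          # page address -> 64-byte page currently resident in the cache
    main_memory = {}    # page address -> 64-byte page held in the slower main store

    def page_of(address: int) -> int:
        return address // CACHE_PAGE_BYTES

    def fetch(address: int) -> bytes:
        page = page_of(address)
        if page not in cache:                # cache "miss"
            cache[page] = main_memory[page]  # move the 64-byte page into the cache
        offset = address % CACHE_PAGE_BYTES  # the re-request now hits in the cache
        return cache[page][offset:offset + 1]

    # Example: place one page in "main memory" and fetch a byte through the cache.
    main_memory[page_of(0x1234)] = bytes(range(64))
    assert fetch(0x1234) == bytes([0x1234 % CACHE_PAGE_BYTES])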
More particularly, there is provided:
In a data processing system including an information processing unit;
a low speed, high capacity main memory;
a high speed, low capacity cache for temporarily storing data being used by said information processing unit;
at least one input/output channel for transferring information into or out of said information processing unit from devices other than said main memory or said cache;
a data transfer and control means comprising: a storage address register connected to said information processing unit for storing the address of data requested by said information processing unit;
a directory connected to said storage address register for storing addresses of data stored in said cache;
an error correction/bit generator connected to the output of said main memory for detecting memory errors and for generating correction bits;
an input/output data register connected to said information processing unit, said cache and said error correction/bit generator by a plurality of bidirectional data busses for transferring information into or out of said information processing unit from or to said main memory or said cache;

means including said directory and said storage address register for interrogating a data request from said information processing unit to determine if it is a hit indicating that said data is in said cache or a miss indicating that said data is not in said cache;

transfer means connected to said main memory, said cache, said error correction/bit generator and said interrogating means, and operative in response to a miss indication from said interrogating means to initiate the transfer of the requested data from said main memory to said cache; and

a unitary control connected to said information processing unit, said main memory, said cache and said error correction/bit generator operative to control all data transfers between said information processing unit, said cache and said main memory; said unitary control including said interrogating means, said transfer means, and a means connected to said main memory and said storage address register for maintaining the synchronization of each step in the transfer of data between said main memory and said error correction/bit generator, between said error correction/bit generator and said input/output data register, between said input/output data register and said cache, and between said information processing unit and said input/output data register, whereby said data transfers can take place simultaneously and in step-by-step synchronization.

Description of the Drawings

FIG. 1 is a block diagram of a prior art processor system having two levels of control for a bilevel memory store;

FIG. 2 is a block diagram illustrating a single level of control for a bilevel memory system according to the present invention;

FIG. 3 is a block diagram of a data processor system having a bilevel memory store which illustrates the system data flow;

FIG. 4 is a block representation of the address partitioning of a storage address register used in a store controller according to the present invention;
;

FIG. 5 is a detailed block diagram illustrating the storage control for a data processor system according to the present invention;

FIG. 6 is a diagram illustrating the sequences of the data transfer according to the present invention;

FIG. 7 is a timing diagram showing the sequences involved in a cache "miss" and the subsequent transfer of data from main memory to the cache; and

FIG. 8 is a flow chart illustrating the sequence of events as they relate to the timing diagram of FIG. 7.

Description of the Preferred Embodiment

The foregoing and other objects, features and advantages of the present invention will be apparent from the following more particular description of the preferred embodiments of the invention taken in conjunction with the above-described accompanying drawings.
For illustrative purposes the present invention will be described in the context of a bilevel memory system. However, it will be readily apparent to those skilled in the art that the inventive concept is readily applicable to a multilevel memory system having more than two levels of memory or having parallel memories at one or more levels.

Referring first to FIG. 1, there is shown a typical prior art data processing system having a two-level memory store. The system includes an information processing unit 11, with the channel hardware 13 included therein.
The two-level store consists of a high speed cache 15, an error correction/bit generator 16 and a relatively lower speed main memory 17. The data flow between the IPU 11 and the cache 15 is controlled by a level I control 19, which controls the flow of data from the cache 15 through an input/output register 14 to the IPU 11. If the desired data is not found in cache, the control is transferred to a level II control 21, which will locate the data in the main memory and transfer it via the error correction/bit generator 16 and the input/output data register 14 to the cache 15. In this system configuration, it is necessary that there be an effective "handshake" mode of operation between the level I control 19 and the level II control 21. The level I control would then control the flow of data from the bus into the appropriate cache location. This, in turn, affects the overall operating speed of the system and thereby limits its effectiveness.

Referring next to FIG. 2, there is shown in block diagram form the concept of a single control 23 for the IPU and channel 25, the input/output register 26, the cache 27, the error correction/bit generator 28 and the main memory 29. By utilizing a single control 23, if the desired data is not found in the cache 27, the main memory 29 can be immediately accessed without a need to first transfer to a different control mechanism all of the necessary data address information. Also, in situations where it is desired to transfer data directly from the processing unit 25 to the main memory 29, by using a single control 23 this can be accomplished by way of a data path 31 which bypasses the cache 27. The specific operational characteristics of such a system will be shown in more detail in the description of the subsequent drawings.
The block diagram of the system data flow is illustrated in FIG. 3. In this block diagram the essential components involved in the system data flow are illustrated. Referring first to the processor unit 25, the essential elements are the channel buffer 35 in the channel 37, a byte shifter 39 and a local store 41. The channel buffer communicates with the byte shifter via a data line 43 and the byte shifter communicates with the local store via a data line 45.
The cache system 27 includes the actual cache store 51 connected via a data line 53 to an input/output data register 55. The data register 55 is connected through a bidirectional data line 57 to the byte shifter 39 in the processor 25. The data flow mechanism communicates via a data line 59 with an error correction/bit generator 61, which in turn connects via a data line 63 to the main memory 29. A directory 65 is connected to the cache store 51 and a directory look-aside table (DLAT) 67 is resident in the cache system 27. Connected to the data register 55 via data lines 69 and 71 are a retry buffer 73 and a swap buffer 75, whose functions will be more fully described hereinafter. The cache system 27 also includes a key mechanism 77 and a storage address register (SAR) 79. In addition, there is a unilateral address bus 81 and a unilateral command bus 83 running from the processor 25 to the cache system 27. The controls 23 for these systems are connected via data lines 85, 87 and 89.

FIG. 4 illustrates a 24 bit storage address register (SAR) 79 which is the main addressing facility of the storage control unit 27 of FIG. 3. The address set into the SAR is gated from the IPU on address bus 81 of FIG. 3 and, as illustrated in FIG. 4, various combinations of bits from the SAR are used to address controls in the store controller 27. By way of example, the SAR will be illustrated with bits 2-12 defining the real address of a 2K page in memory, bits 13-17 defining a cache page address, bits 18-20 defining an 8-byte line within a cache page and bits 21-23 defining a byte within a line. This address partitioning will become more apparent when seen in connection with the description of the store controller of FIG. 5.
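As a rough sketch (not part of the disclosure), the example partitioning can be expressed as bit-field extraction on a 24-bit address, taking bit 0 as the most significant bit in keeping with the numbering above; the function names are hypothetical.

    SAR_WIDTH = 24  # bit 0 is the most significant bit of the 24-bit address

    def sar_field(addr: int, first: int, last: int) -> int:
        """Extract SAR bits first..last inclusive."""
        width = last - first + 1
        shift = SAR_WIDTH - 1 - last        # distance of the field's low bit from bit 23
        return (addr >> shift) & ((1 << width) - 1)

    def partition(addr: int) -> dict:
        return {
            "real_2k_page": sar_field(addr, 2, 12),   # real address of a 2K page in memory
            "cache_page":   sar_field(addr, 13, 17),  # one of 32 64-byte cache pages
            "line_in_page": sar_field(addr, 18, 20),  # 8-byte line within the cache page
            "byte_in_line": sar_field(addr, 21, 23),  # byte within the line
        }

    addr = (0x5A5 << 11) | (0x15 << 6) | (5 << 3) | 3
    assert partition(addr) == {"real_2k_page": 0x5A5, "cache_page": 0x15,
                               "line_in_page": 5, "byte_in_line": 3}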

In FIG. 5, the various components of the store controller and their relationships to the cache and main memory are illustrated and the storage data flow is indicated. The system includes a processor directory look-aside table (DLAT) 102 and a channel directory look-aside table 103, with each entry of the processor DLAT containing a virtual and real address field, along with a fetch and status bit. The channel DLAT component contains the entries for channel virtual to real addressing capability. The system also includes a key stack 105 with multiple entry components, each entry representing a given page in main store 107. The cache directory 109 contains a plurality of entries with multiple way associativity. For example, the cache directory might be four-way associative and, therefore, the cache 111 would contain four data areas. Each area of the cache 111 contains a plurality of cache pages and the cache is addressed by the storage address register. The system further includes a key check device 113, an input/output data register 115 and a swap buffer 117. There are two components of a real address register assembly, 119 and 121, hereinafter referred to as RA1 and RA2. The controller additionally comprises a compare circuit 123 and an error correction/bit generator 125. A main memory controller 127 and storage control registers 129 interface with the main memory.

For purposes of illustration, it will be assumed that the main memory has a 2 megabyte storage capability and the cache 111 is an 8-byte by 1K entry facility containing the four data areas, with each area containing 32 cache pages or 256 lines. For such a system, the directory 109 will contain 128 entries with four-way associativity and the key stack is a 1K entry component with each entry representing a 2K page in main storage. The input/output data register 115 will be described as an 8-byte data transfer register, which both receives the processor data on a storage write and sends the data to the processor on a storage read operation. The input/output data register 115 also moves data between components in the storage controller.
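As a quick arithmetic check (an illustration, not a statement from the patent), the assumed parameters above are mutually consistent; the constant names are hypothetical.

    LINE_BYTES = 8            # width of the data bus, one double word
    CACHE_ENTRIES = 1024      # "8-byte by 1K entry facility"
    WAYS = 4                  # four data areas (four-way associativity)
    PAGE_BYTES = 64           # one cache page
    MAIN_STORE_BYTES = 2 * 1024 * 1024   # assumed 2 megabyte main memory
    KEY_PAGE_BYTES = 2048                # each key stack entry covers a 2K page

    cache_bytes = LINE_BYTES * CACHE_ENTRIES                       # 8192 -> an 8K byte cache
    pages_per_area = cache_bytes // WAYS // PAGE_BYTES             # 32 cache pages per area
    lines_per_area = pages_per_area * (PAGE_BYTES // LINE_BYTES)   # 256 lines per area
    directory_entries = pages_per_area * WAYS                      # 128 four-way entries
    key_stack_entries = MAIN_STORE_BYTES // KEY_PAGE_BYTES         # 1K entries

    assert (cache_bytes, pages_per_area, lines_per_area,
            directory_entries, key_stack_entries) == (8192, 32, 256, 128, 1024)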
The error correction/bit generator 125 provides the correct parity information on the main memory/cache data path. The directory 109 and the directory look-aside tables 102, 103 receive addressing via the storage address register which, as previously described, is a 24 bit register used to address, via bit grouping, the components of the storage control section. The addresses thereof may be virtual or real. RA1 and RA2 register components 119 and 121 receive addresses from the processor DLAT 102 and the directory 109, respectively, and, in conjunction with the SAR, address the main memory 107 via the storage control registers 129.

The cache directory 109 is addressed by storage address register bits 13-17 and specifies a 64-byte cache page. Each entry contains an 11 bit real address and 3 status bits: one bit indicating a valid or invalid status, a modification bit indicating the modify status and a bad entry bit indicating the physical condition of the cache entry. With the four-way associativity, four cache pages, belonging to four different 2K pages, reside concurrently in the cache 111. The source of the real address is the real address fields from the processor DLAT 102 or the storage address register, via RA1 component 119.
The cache directory indicates if the desired page is in cache. If the real address is found to be in the directory and its entry is valid, then the data is in cache. This is defined as a "hit". If the real address is not found in the directory, or if its entry is invalid, then the data is not in the cache and this is referred to as a data "miss". For a miss, it is necessary to access the main memory to bring the desired data therefrom to the cache.
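A minimal sketch of the hit/miss test just described, assuming the geometry given above (32 directory addresses, four associativity classes); the class and field names are hypothetical and no replacement policy is modelled.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class DirEntry:
        real_addr: int = 0     # 11-bit real address of the 2K page the cache page belongs to
        valid: bool = False
        modified: bool = False
        bad: bool = False      # physical condition of the cache entry

    class CacheDirectory:
        WAYS = 4               # four-way associativity
        SETS = 32              # addressed by SAR bits 13-17

        def __init__(self):
            self.entries = [[DirEntry() for _ in range(self.WAYS)]
                            for _ in range(self.SETS)]

        def lookup(self, cache_page_index: int, real_addr: int) -> Optional[int]:
            """Return the associativity class of a hit, or None on a miss."""
            for way, e in enumerate(self.entries[cache_page_index]):
                if e.valid and not e.bad and e.real_addr == real_addr:
                    return way          # "hit": the data is in the cache
            return None                 # "miss": main memory must be accessed

    directory = CacheDirectory()
    directory.entries[5][2] = DirEntry(real_addr=0x3A7, valid=True)
    assert directory.lookup(5, 0x3A7) == 2
    assert directory.lookup(5, 0x111) is None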

The cache 111 is an 8K byte facility divided into four sections, defining the four-way associativity with the directory 109. Each section of the cache contains 32 entries of 64 bytes each. The cache receives data from the I/O data register 115 and from the IPU data bus 135. The output from the cache goes to the I/O data register 115. All four data areas of the cache are addressed simultaneously by the storage address register, with the SAR address bit field 13-17 addressing the page component and the SAR bit field 18-20 addressing the 8-byte line component. A final selection is made by the associativity class from the directory 109 on which the directory hit occurred.

In operation, 64-byte pages are loaded into the cache 111 from the main memory 107 only on those commands in which a directory "miss" trap may occur, with the data being transmitted via the I/O data register 115.

The swap buffer 117 stores one cache page at a time and is used to buffer the outgoing page from cache in an outpage operation; it also stores syndrome bits generated during a fetch from the main memory 107. The syndrome bits are used to identify any data corrected by the error correction/bit generator 125 on any load from storage. A retry buffer (not shown) can be used to store those double words read from cache, prior to modification, in a write operation in which the cache is modified.

The key stack 105 has a plurality of entries, with each entry representing a 2K page in storage. Each entry contains a plural bit storage protection key, a fetch protection bit and a reference bit and change bit for the identified page. The input for the key stack array is from the I/O data bus. The output from the key stack 105 is checked with the key bus 137 or with the two key fields from the processor DLAT 102. The key stack also receives an input from the real address assembly component 119 using bits 2-12 thereof.

The main memory, which has a storage capacity typically on the order of megabytes, receives and sends data via the error correction/bit generator 125. The data is selected from the main memory based upon inputs from the memory controller 127, from the real address assembly units 119, 121 and from the storage address register. Data to and from the main memory is transferred 8 bytes at a time on an 8-byte bidirectional data bus connected between the error correction/bit generator and the main memory. In the configuration according to the present invention, inputs from the channel will always be written directly into the main memory and will invalidate an old cache page having the same address, if it is contained in cache at the time the channel writes to memory. Conversely, the processor will always write into cache, which will then transfer data to the main memory if appropriate. Accordingly, the main memory clock and the channel clock will generally run in sync, for example, using four pulses in a 150 nanosecond cycle time. Also, the cache clock and the processor clock will run together and may be on either a 4, 6 or 8 pulse clock cycle.

As mentioned previously, the input/output data register 115 is an 8-byte register used to move data to and from the processor/channel and the store. The data register also moves data between components in the store controller as illustrated in FIG. 5. The output of the data register may go to the cache input, to the processor data bus, to the swap buffer (or retry buffer) and to the error correction/bit generator. The data register may be set from the cache output, from the processor data bus, from the error correction/bit generator, from the key array, from the retry buffer and from the swap buffer.

The real address assembler is comprised of RA1 119 and RA2 121. RA1 is set from the storage address register or from the real address fields of the directory look-aside tables 102, 103. RA2 is set from the directory 109 real address entry that compares equal. With a DLAT "hit" and a directory "miss", the real address from RA1 is gated to the main memory 107. At the same time, SAR bits 13-17 are also gated to the main memory, with the address bits from RA1 addressing a selected 2K page and with bits 13-17 addressing the selected 64 bytes (cache page). The output of the real address assembly may also be gated to the input of the directory for loading the real address, to the key stack for reading or storing the key, or to the retry/swap buffer array for storing read addresses.

The manner in which the data transfers within the system take place is illustrated by the control set out in FIG. 6. The basic control mechanism is a bank of ring counters. The number of ring counters used varies depending upon the particular application. For transferring 64-byte cache pages, a three or four ring counter is used. Illustrated in FIG. 6 is a four ring counter with the rings defined as A, B, C and D. The data bus width, as described above, is 8 bytes and, therefore, eight sequential data transfers must take place to transfer one cache page. Each of the eight 8-byte transfers is identified with one of the 8 ring positions. That is, each ring position identifies a double word of 8 bytes, with position 0 identifying the first double word and position 7 identifying the last double word.

Each of the rings in the bank of ring counters identifies where in the system data flow a given word is on a given cycle. Three of the ring counters are used to transfer data from the main memory to the cache and four are used to transfer data from main memory to a channel of the IPU.
Assume, for example, that 64 bytes of data are to be transferred from the main memory to the channel in the IPU. Then the sequence would occur as follows: for the first cycle, ring A identifies the cycle on which each of the 8 bytes appears on the memory bus 63 of FIG. 3; ring B identifies the cycle in which each of the 8 bytes passes through the error correction/bit generator 61; and ring C identifies the cycle in which each of the 8 bytes is stored into the in/out data register and thus can be read into the cache. Ring D indicates that the data appears on the bidirectional data bus 57 and is available to the processor or the channel.

If a fetch operation from the main memory 29 to the processor 25 or channel 37 is requested, the sequence is as follows: During the second cycle, position 0 of ring B (FIG. 6) is active, meaning that double word zero is now passing through the error correction/bit generator, and position 1 of ring A is on, indicating that the main memory data bus now contains double word one. During the third cycle, double word zero is passing through the in/out data register 55 of FIG. 3, corresponding to position 0 of ring C. Double word one is in the error correction/bit generator 61 and double word two appears on the main memory data bus 63. During the fourth cycle, double word zero is present on the bidirectional data bus 57, double word one is passing through the data register 55, double word two is passing through the error correction/bit generator 61 and double word three is present on the main memory data bus 63. This process continues until the desired amount of data is transferred, for example, through enough cycles for a full 64-byte page.
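The pipelined movement described above can be simulated with a short sketch (an illustration under the stated ring definitions, not the patent's control logic); the function name is hypothetical.

    def simulate_fetch(n_words: int = 8, to_processor: bool = True) -> None:
        """Print which double word occupies each pipeline stage on every cycle."""
        stages = ["ring A: main memory bus",
                  "ring B: error correction/bit generator",
                  "ring C: input/output data register"]
        if to_processor:
            stages.append("ring D: bidirectional data bus")  # only for IPU/channel fetches
        for cycle in range(n_words + len(stages) - 1):
            active = []
            for depth, name in enumerate(stages):
                word = cycle - depth          # each word advances one stage per cycle
                if 0 <= word < n_words:
                    active.append(f"{name} holds double word {word}")
            print(f"cycle {cycle + 1}: " + "; ".join(active))

    simulate_fetch()   # eight double words = one 64-byte cache page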

For transfers between the cache and main memory only three ring counters are needed, since the data, when resident in the register 55, is immediately available to be read into the cache. The fourth ring counter is needed for transfers between the IPU/channels and main memory since there is an additional cycle required to transfer the data to the 8-byte bidirectional data bus 57 between the IPU/channels 25 and the data register 55.

On store operations, the data is transferred in the opposite direction and, therefore, the rings A, B, C and D define data transfer in the reverse order.
This is true whether data is being transferred from the channel/processor to the main memory 29 or whether data is being transferred from the cache 51 through the swap buffer to the main memory 29.

To better understand the operation of the system, the sequence of events that occurs to transfer data from the main memory to the cache, when the processor requests data from the cache and a cache miss occurs, is illustrated in FIG. 8, with the timing sequence shown in the timing chart of FIG. 7. The ring counters are as illustrated in FIG. 6, with the location and time as illustrated in FIG. 5. In the illustration of the flow diagram of FIG. 8, the horizontally positioned boxes are understood to be operating in parallel, whereas the vertically positioned boxes operate sequentially. Prior to the initiation of the transfer of data from the main memory to the cache, the main memory/channel clocks must be brought into synchronization with the cache and IPU clocks. In FIG. 7, this is shown as having already occurred, using any of the synchronization processes well known in the art. In the first step as illustrated in FIG. 8, the storage address is issued and the directory look-aside table is searched. If a search of the directory 109 of FIG. 5 indicates a miss, then the first read cycle of FIG. 7 is followed by a trap sequence on the next two memory/channel cycles and then by a return on the following two cycles. At the initiation of the trap, a directory miss pulse is brought up, so that following the return cycle of the IPU clock, the processor effectively is "put to sleep", as illustrated by the "X's" on the timing line for the IPU clock in FIG. 7. After a check confirms that the miss has in fact occurred, the command and addresses are presented to the main memory 107 and it starts the control ring sequence.

As illustrated, when the control ring counter is initiated, the trap and return cycle commands are also initiated and the IPU clock is stopped. Following the presenting of the command and address to the memory, the memory access is initiated and an indeterminate access delay occurs while locating the data address in the memory. While this is shown as one access delay cycle in FIG. 7, it can actually require more than one cycle. Once the data is located in memory (on cycle X), the first double word is requested and the ring counter A begins counting with the first double word in position 0. On the next cycle, X + 1, the second double word is requested, the first double word has moved to the error correction/bit generator 125 (ring counter B) and the ring counters A and B are advanced to indicate now that ring counter A is in position 1 and B is in position 0.

On the third transfer cycle, X + 2, the third double word is requested, the second double word is in the error correction/bit generator 125 and the first double word is in the input/output register 115. At the end of cycle X + 2, the cache write pulse is generated and will read the first double word from the input/output data register 115 to the cache 111.
On cycle X + 3, the fourth double word is requested from memory, the third double word is in the error correction/bit generator 125, the second double word is in the input/output register 115 and the first double word is resident in the cache 111. At this point, the pipelining effect of the data transfer is full and continues until all data is transferred.
Also, towards the end of the X + 3 cycle, the second double word is read from the input/output data register 115 into the cache 111. This sequence of events continues up through the X + 9th cycle, at which time the last double word has been stored in the input/output register 115 and read therefrom on a cache write pulse to the cache 111. At the end of the X + 9th cycle, the directory miss pulse is dropped.
At this time, an IPU complete pulse is generated which turns on the IPU clock and causes the issuing of a read pulse. At this juncture, the processor will issue to the cache the address for the requested data, the directory look-aside table will be searched and a directory hit will occur. After the checking for the accuracy of the directory hit, the data will be transferred on the subsequent cycles from the cache to the processor.
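Restating the miss-handling sequence informally (the names and the printed trace are illustrative assumptions, not the patent's timing), the control flow can be sketched as follows.

    def miss_sequence(words: int = 8, access_delay_cycles: int = 1) -> None:
        """Print an event trace for a directory miss followed by a cache inpage."""
        print("issue address; search DLAT and directory -> miss")
        print("trap (2 memory/channel cycles); directory miss pulse up; IPU clock stopped")
        print("return (2 cycles); command and address presented to main memory")
        for _ in range(access_delay_cycles):
            print("memory access delay")
        for n in range(words):
            print(f"cycle X+{n}: request double word {n}; earlier words advance through "
                  "the error correction/bit generator and the I/O data register")
        print(f"cycle X+{words + 1}: last double word written into the cache; "
              "directory miss pulse dropped; IPU complete pulse restarts the IPU clock")
        print("processor re-issues the address; the directory search now hits; "
              "data moves from the cache to the processor")

    miss_sequence()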

From the foregoing, it is readily apparent that applicant has provided a multilevel memory storage concept for a data processing system having a single storage control mechanism which provides improved operating speed and increased reliability. Using the concept for a two or more level memory system, the centralized control scheme provides data transfer control to/from a processor/channel and the first and subsequent levels of the storage hierarchy in a manner which improves the throughput of the system operation. It will be readily apparent to those skilled in the art that various modifications and changes can be made to the foregoing without departing from the spirit or scope of the invention. Accordingly, it is intended that the invention not be limited to the specifics of the foregoing description of the preferred embodiment, but rather is to embrace the full scope of the appended claims.

" ' '' "' ' , "


Claims

The embodiments of the invention in which an exclusive property or privilege is claimed are defined as follows:
1. In a data processing system including an information processing unit;
a low speed, high capacity main memory;
a high speed, low capacity cache for temporarily storing data being used by said information processing unit;
at least one input/output channel for transferring information into or out of said information processing unit from devices other than said main memory or said cache;
a data transfer and control means comprising:
a storage address register connected to said information processing unit for storing the address of data requested by said information processing unit;
a directory connected to said storage address register for storing addresses of data stored in said cache;
an error correction/bit generator connected to the output of said main memory for detecting memory errors and for generating correction bits;
an input/output data register connected to said information processing unit, said cache and said error correction/bit generator by a plurality of bidirectional data busses for transferring information into or out of said information processing unit from or to said main memory or said cache, means including said directory and said storage address register for interrogating a data request from said information processing unit to determine if it is a hit indicating that said data is in said cache or a miss indicating that said data is not in said cache;
1. (Continued) transfer means connected to said main memory, said cache, said error correction/bit generator and said interrogating means, and operative in response to a miss indication from said interrogating means to initiate the transfer of the requested data from said main memory to said cache; and a unitary control connected to said information processing unit, said main memory, said cache and said error correction/bit generator operative to control all data transfers between said information processing unit, said cache and said main memory; said unitary control including said interrogating means, said transfer means, and a means connected to said main memory and said storage address register for maintaining the synchronization of each step in the transfer of data between said main memory and said error correction/bit generator, between said error correction/bit generator and said input/output data register, between said input/output data register and said cache, and between said information processing unit and said input/output data register, whereby said data transfers can take place simultaneously and in step-by-step synchronization.
CA 335621 1978-10-26 1979-09-14 Integrated multilevel storage hierarchy for a data processing system Expired CA1123964A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US95503178 true 1978-10-26 1978-10-26
US955,031 1978-10-26

Publications (1)

Publication Number Publication Date
CA1123964A true CA1123964A (en) 1982-05-18

Family

ID=25496275

Family Applications (1)

Application Number Title Priority Date Filing Date
CA 335621 Expired CA1123964A (en) 1978-10-26 1979-09-14 Integrated multilevel storage hierarchy for a data processing system

Country Status (4)

Country Link
EP (1) EP0010625B1 (en)
JP (1) JPS5818710B2 (en)
CA (1) CA1123964A (en)
DE (1) DE2965288D1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA1187198A (en) * 1981-06-15 1985-05-14 Takashi Chiba System for controlling access to channel buffers
JPS6047624B2 (en) * 1982-06-30 1985-10-22 Fujitsu Ltd
CA1299767C (en) * 1987-02-18 1992-04-28 Toshikatsu Mori Cache memory control system

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB976499A (en) * 1960-03-16 1964-11-25 Nat Res Dev Improvements in or relating to electronic digital computing machines
US3275991A (en) * 1962-12-03 1966-09-27 Bunker Ramo Memory system
US3693165A (en) * 1971-06-29 1972-09-19 Ibm Parallel addressing of a storage hierarchy in a data processing system using virtual addressing
US3786427A (en) * 1971-06-29 1974-01-15 Ibm Dynamic address translation reversed
US3735360A (en) * 1971-08-25 1973-05-22 Ibm High speed buffer operation in a multi-processing system
US3761883A (en) * 1972-01-20 1973-09-25 Ibm Storage protect key array for a multiprocessing system
US3829840A (en) * 1972-07-24 1974-08-13 Ibm Virtual memory system
US3806888A (en) * 1972-12-04 1974-04-23 Ibm Hierarchial memory system
JPS504530A (en) * 1973-05-16 1975-01-17
US3911401A (en) * 1973-06-04 1975-10-07 Ibm Hierarchial memory/storage system for an electronic computer
US3839706A (en) * 1973-07-02 1974-10-01 Ibm Input/output channel relocation storage protect mechanism
US3883854A (en) * 1973-11-30 1975-05-13 Ibm Interleaved memory control signal and data handling apparatus using pipelining techniques
US3896419A (en) * 1974-01-17 1975-07-22 Honeywell Inf Systems Cache memory store in a processor of a data processing system
US3938097A (en) * 1974-04-01 1976-02-10 Xerox Corporation Memory and buffer arrangement for digital computers
US4020466A (en) * 1974-07-05 1977-04-26 Ibm Corporation Memory hierarchy system with journaling and copy back
US3967247A (en) * 1974-11-11 1976-06-29 Sperry Rand Corporation Storage interface unit
US4075686A (en) * 1976-12-30 1978-02-21 Honeywell Information Systems Inc. Input/output cache system including bypass capability

Also Published As

Publication number Publication date Type
EP0010625A1 (en) 1980-05-14 application
JPS5818710B2 (en) 1983-04-14 grant
DE2965288D1 (en) 1983-06-01 grant
CA1123964A1 (en) grant
JPS5558880A (en) 1980-05-01 application
EP0010625B1 (en) 1983-04-27 grant


Legal Events

Date Code Title Description
MKEX Expiry