WO1991014225A1 - Apparatus and method for providing a stall cache - Google Patents

Apparatus and method for providing a stall cache

Info

Publication number
WO1991014225A1
Authority
WO
WIPO (PCT)
Prior art keywords
instruction
instructions
data
processor
bus
Prior art date
Application number
PCT/US1991/001292
Other languages
French (fr)
Inventor
Edward H. Frank
Masood Namjoo
Original Assignee
Sun Microsystems, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Microsystems, Inc. filed Critical Sun Microsystems, Inc.
Priority to DE69130233T priority Critical patent/DE69130233T2/en
Priority to KR1019920702194A priority patent/KR100210205B1/en
Priority to EP91906378A priority patent/EP0592404B1/en
Priority to JP3506054A priority patent/JP2879607B2/en
Publication of WO1991014225A1 publication Critical patent/WO1991014225A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38Concurrent instruction execution, e.g. pipeline or look ahead
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38Concurrent instruction execution, e.g. pipeline or look ahead
    • G06F9/3802Instruction prefetching
    • G06F9/3814Implementation provisions of instruction buffers, e.g. prefetch buffer; banks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38Concurrent instruction execution, e.g. pipeline or look ahead
    • G06F9/3802Instruction prefetching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38Concurrent instruction execution, e.g. pipeline or look ahead
    • G06F9/3802Instruction prefetching
    • G06F9/3808Instruction prefetching for instruction reuse, e.g. trace cache, branch target cache
    • G06F9/381Loop buffering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38Concurrent instruction execution, e.g. pipeline or look ahead
    • G06F9/3824Operand accessing

Definitions

  • Figure 3 illustrates a timing diagram showing the operation of a typical pipelined RISC computer processor while processing instructions which in addition to single cycle instructions include a store instruction.
  • the computer may, for example, be one of the SPARC based computers manufactured by Sun Microsystems, Mountain View, California. Such computers normally require two clock cycles to transfer data during a store operation. It is not, however, necessary either to the understanding of this invention or to its implementation that the system require two bus cycles for a store operation.
  • a number of system clock cycles are illustrated along the top of the diagram commencing with clock cycle time 0 and continuing through clock cycle time 8.
  • the decoding of the instruction i0 at time 1 provides the store addresses which require that the processor transfer data to memory during two different cycles. Since both data and instructions cannot be on the data bus at the same time, the system is required to stall the pipeline for two clock cycles while the data is placed on the data bus in the data fetch steps shown as DF in Figure 3. Once the data has been fetched and placed in the pipeline, the next instruction may be fetched. However, the execution of instructions has been delayed by two clock cycles so that instruction i3 is not complete until clock cycle time 8.
  • the two cycle minimum time has been shortened in the prior art by providing an on-chip general purpose instruction cache so that an internal source of instructions is available and both data and instructions need not be on the bus at the same time.
  • an instruction cache usually requires a good deal of chip space and may not be feasible without radical overhaul of the architecture of the chip, a procedure which may be very expensive.
  • the time has also been shortened by providing both an instruction and a data bus for the system so that instructions and data may be transferred in parallel. This also requires a good deal of chip architecture revision and significant expense.
  • the present invention overcomes this problem of the prior art in a different manner.
  • the present invention provides a very small on-chip restricted-use cache that may be termed a stall cache for the purposes of this disclosure.
  • This cache may have only a very few lines, for example, in a preferred embodiment the stall cache may be a fully-associative cache having a line size of one instruction so that no unnecessary instructions need be stored in the cache.
  • a cache of this type having eight lines would require a very small amount of die space on the chip. Consequently, it might be placed on presently designed chips without requiring any substantial architecture modification.
  • the cache might also be implemented in other forms having a larger line size and therefore storing more than load and store instructions.
  • An N-way set associative cache might also be utilized.
  • the cache of this invention is somewhat like a target-of-a-branch cache which is also a restricted type of cache.
  • a target-of-a-branch cache is utilized to hold instructions that are the targets of branches while the present invention utilizes a restricted cache to hold instructions involved in bus contentions.
  • the cache is arranged to hold instructions just as in a full sized instruction cache except that the instructions held are only those instructions that would have been accessed on the data bus during a period in which a load or a store operation (or a similar operation in any pipelined system whether RISC or CISC having instructions which cause a stall due to bus contention) occupies the pipeline so that the instruction cannot be accessed in the normal one instruction per clock sequence.
  • the processor of the present invention is arranged so that the instruction may be accessed in the on-chip stall cache at the same instant that the data is being fetched on the data bus, essentially in parallel. Consequently, when the information is in the stall cache, no time is lost due to the load and store instruction delays. It has been found that the significant speed increases are attained simply by using such a cache because most programs are filled with a series of small looping processes which miss on a first loop but fill the cache so that on successive loops no delay is incurred in load and store operations and cache speeds are attained.
  • FIG. 7 is a block diagram illustrating a typical RISC computer system 30 designed in accordance with the present invention.
  • the system 30 includes a central processing unit 32, an external cache 33, and memory 34.
  • the three units are associated by data bus 36 and instruction bus 37.
  • the central processing unit 32 includes an arithmetic and logic unit 40, a register file 41, an instruction decode/fetch instruction data unit 42, a bus interface 43, and a multiplexor 44 just as does the prior art processor illustrated in Figure 6.
  • the unit 32 includes a stall cache 46 arranged to receive and store those instructions which are delayed due to a fetch of data during a load or store operation.
  • the stall cache 46 is connected to the instruction decode/fetch instruction data unit 42 by the address bus and by an internal instruction bus 47 which provides a path for the transfer of instructions from the cache 46 during the period in which data is being transferred over the external data bus 36.
  • data may be provided to the register file by the data bus 36 via the multiplexor 44 while an instruction is accessed from the stall cache 46.
  • a bit may be set to indicate that the instruction that follows the data access is to be stored in the stall cache 46.
  • the instruction that follows the load instruction is the instruction fetched three cycles later, immediately after the data fetch cycle.
  • the instruction which has been stalled or delayed by the necessity to access the data is placed in the stall cache 46.
  • this is the instruction accessed four cycles later, immediately after two data fetch cycles have been completed.
  • the instruction will be present so that an external fetch need not occur; and the instruction may be accessed over the internal instruction bus simultaneously with the access of the data on the external data bus 36.
  • the stall cache will contain instructions that would have been fetched during a bus stall. There may be one, two, or more instructions for each instruction that caused a stall.
  • FIG. 4 illustrates a timing diagram showing the operation of a pipelined RISC computer processor designed in accordance with the invention while processing instructions which in addition to single cycle instructions include a load instruction.
  • a number of system clock cycles are illustrated along the top of the diagram commencing with cycle time 0 and continuing through clock cycle time 6.
  • Four instructions i0 through i3 are illustrated.
  • the assertion of the addresses of the instructions and the load address on the address bus and the assertion of the instructions and data on the data bus are illustrated below the steps of the instructions.
  • Figure 5 is a timing diagram illustrating the operation of the unit 32 in handling a store operation. Again it may be seen that no delay is incurred due to a stall condition, and the pipeline continues to execute instructions once every clock cycle.
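The fill-and-hit behavior described in the bullets above can be sketched as a small simulation. Everything below is illustrative, not taken from the patent: the eight-line size and the LRU replacement policy are assumptions, and names such as `StallCache` and `run_loop` are hypothetical.

```python
from collections import OrderedDict

class StallCache:
    """Sketch of the stall cache: a tiny fully-associative cache with
    a line size of one instruction, holding only the instructions
    whose fetch was blocked by a load/store data access."""

    def __init__(self, lines=8):
        self.lines = lines
        self.entries = OrderedDict()  # instruction address -> instruction word

    def lookup(self, address):
        if address in self.entries:
            self.entries.move_to_end(address)  # refresh LRU position
            return self.entries[address]
        return None

    def fill(self, address, instruction):
        self.entries[address] = instruction
        self.entries.move_to_end(address)
        if len(self.entries) > self.lines:
            self.entries.popitem(last=False)   # evict least recently used

def run_loop(program, cache, iterations):
    """Count stall cycles over repeated executions of a loop body.
    A load stalls the shared data bus for one cycle unless the
    shadowed next instruction hits in the stall cache, in which case
    it is supplied on-chip in parallel with the data transfer."""
    stalls = 0
    for _ in range(iterations):
        for pc, op in enumerate(program):
            if op == "load":
                next_pc = pc + 1
                if next_pc < len(program) and cache.lookup(next_pc) is None:
                    stalls += 1                       # miss: one-cycle bus stall
                    cache.fill(next_pc, program[next_pc])
    return stalls
```

This reproduces the behavior claimed for small loops: the first iteration misses and fills the cache, and every later iteration runs with no load stalls.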

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Advance Control (AREA)
  • Memory System Of A Hierarchy Structure (AREA)
  • Developing Agents For Electrophotography (AREA)

Abstract

A computer processor (32) which utilizes a very small on-chip cache (46) for storing only those instructions which are not accessed because of a data access on the data bus (36) occurring during a load or a store instruction whereby the instruction is available to the processor upon the next data access during such an instruction without the delay of a bus access.

Description

APPARATUS AND METHOD FOR PROVIDING A STALL CACHE
BACKGROUND OF THE INVENTION
1. Field of the Invention:
This invention relates to computer systems and, more particularly, to methods and apparatus for increasing the speed of operation of computer processors capable of providing pipelined operation.
2. History of the Prior Art:
The development of digital computers progressed through a series of stages, beginning with processors that could process only a few basic instructions and had to be programmed at the machine language level, and advancing to processors capable of handling very complicated instructions written in high level languages. At least one of the reasons for this development is that high level languages are easier for programmers to use; and, consequently, more programs are developed more rapidly. Another reason is that, up to some point in their development, the more advanced machines executed operations more rapidly.
There came a point, however, where the constant increase in the ability of the computers to run more complicated instructions actually began to slow the operation of the computer over what investigators felt was possible with machines operating with only a small number of basic instructions. These investigators began to design advanced machines for running a limited number of instructions, a so-called reduced instruction set, and were able to demonstrate that these machines did, in fact, operate more rapidly for some types of operations. Thus began the reduced instruction set computer which has become known by its acronym, RISC. The central processing unit of the typical RISC computer is very simple. In general, it fetches an instruction every clock cycle. In its simplest embodiment, all instructions except for load and store instructions act upon internal registers within the central processing unit. A load instruction is used to fetch data from external memory and place it in an internal register, and a store instruction is used to take the contents of an internal register and place it in external memory.
One of the techniques utilized in RISC and other computers for obtaining higher speeds of operation is called pipelining. Processors utilized in computer systems to provide pipelined operations normally cycle through fetch, decode, execute, and write back steps of operation in executing each instruction. In a typical pipelined system, the individual instructions are overlapped so that an instruction executes once each clock cycle of the system. However, when the pipelined RISC computer is performing a load or store operation, a longer time is required because the typical system has only a single data bus that carries both instructions and data to off-chip memory. A load or a store operation requires that both the instruction and the data be moved on the bus. Consequently, at least two cycles are normally required.
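The cycle costs described above can be captured in a back-of-the-envelope model (a sketch, not from the patent; the one- and two-cycle stall penalties follow the timing diagrams of Figures 1 through 3):

```python
# Extra data-bus cycles that stall the pipeline: per the timing
# diagrams, a load adds one stall cycle and a store (on a SPARC-like
# machine) adds two.
STALL_CYCLES = {"load": 1, "store": 2}

def total_cycles(instructions, fill_cycles=3):
    """Clock cycles until the last instruction completes in a 4-stage
    pipeline (fetch, decode, execute, write back) that retires one
    instruction per cycle once filled, on a machine with a single
    shared data bus."""
    stalls = sum(STALL_CYCLES.get(op, 0) for op in instructions)
    return len(instructions) + fill_cycles + stalls
```

With four single-cycle instructions this gives 7 cycles (completion at clock time 6, as in Figure 1); making the first instruction a load gives 8 cycles, and a store gives 9, matching Figures 2 and 3.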
This two cycle minimum time has been shortened in the prior art by employing the "Harvard" architecture in which separate buses and memories are used for instructions and data. Using this architecture, the processor can continue executing instructions while a load or a store instruction is being performed since there is a separate path for fetching instructions. However, with single chip integrated circuit implementations of RISC processors, a Harvard architecture requires that either the existing address and data pins to memory operate at twice the rate at which instructions are needed or twice the number of pins be provided. In either case, twice the off-chip bandwidth is required.
To avoid this significant change in off-chip bandwidth, some RISC chips provide an on-chip instruction cache so that an internal source of instructions is available and both data and instructions need not be on the bus at the same time. However, such an instruction cache usually requires a good deal of chip space and may not be feasible without radical overhaul of the architecture of a chip.
It is, therefore, very desirable that some means for eliminating the time cost of load and store instructions in pipelined systems, especially RISC systems, be provided. Moreover, this same problem may arise in any situation in which a pipelined processor of any sort must deal with an instruction that causes a stall due to a bus contention. For example, an instruction adding memory to a register might cause such a stall. Input/output controllers utilizing processors are often subject to such problems.
SUMMARY OF THE INVENTION
It is, therefore, an object of the present invention to provide some means for eliminating the time cost of load and store instructions in pipelined systems, especially RISC systems.
It is another more specific object of the present invention to provide a means for eliminating the time cost of load and store instructions in pipelined systems, especially RISC systems, which does not require extensive chip architecture modification.
Another object of the invention is to provide a means for eliminating the time cost of instructions which cause a stall due to bus contention in pipelined operations of processors.
These and other objects of the present invention are realized in a computer processor which utilizes a very small on-chip cache for storing only those instructions the access to which is delayed because of a data access on the data bus occurring during a load or a store instruction whereby the instruction is available to the processor upon the next data access during such an instruction without the delay of a bus access.
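The core mechanism can be sketched in a few lines (illustrative only; the function and parameter names are hypothetical, not from the patent):

```python
def fetch_next(pc, stall_cache, data_bus_busy, memory):
    """Where the next instruction comes from in the invented machine.
    When the external data bus is occupied by a load/store data
    transfer, the stalled instruction can still be supplied in the
    same cycle from the on-chip stall cache; a miss (None) means the
    fetch must wait for the bus as in the prior art."""
    if data_bus_busy:
        return stall_cache.get(pc)  # parallel on-chip access
    return memory[pc]               # normal fetch over the shared bus
```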
These and other objects and features of the invention will be better understood by reference to the detailed description which follows taken together with the drawings in which like elements are referred to by like designations throughout the several views.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 is a timing diagram illustrating a typical pipelined computer system executing a number of instructions that require a single system clock cycle to execute.
Figure 2 is a timing diagram illustrating a typical pipelined computer system executing a number of instructions that require a single system clock cycle to execute and one load instruction that requires two system clock cycles to execute.
Figure 3 is a timing diagram illustrating a typical pipelined computer system executing a number of instructions that require a single system clock cycle to execute and one store instruction that requires three system clock cycles to execute.
Figure 4 is a timing diagram illustrating a pipelined computer system in accordance with the invention executing a number of instructions that require a single system clock cycle to execute and one load instruction.
Figure 5 is a timing diagram illustrating a pipelined computer system constructed in accordance with the invention executing a number of instructions that require a single system clock cycle to execute and one store instruction.
Figure 6 is a block diagram illustrating a typical RISC computer system of the prior art.
Figure 7 is a block diagram illustrating a typical RISC computer system designed in accordance with the present invention.
NOTATION AND NOMENCLATURE
Some portions of the detailed descriptions that follow are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art.
An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities.
Further, the manipulations performed are often referred to in terms, such as adding or comparing, which are commonly associated with mental operations performed by a human operator. No such capability of a human operator is necessary or desirable in most cases in any of the operations described herein which form part of the present invention; the operations are machine operations. Useful machines for performing the operations of the present invention include general purpose digital computers or other similar devices. In all cases the distinction between the method operations in operating a computer and the method of computation itself should be borne in mind. The present invention relates to apparatus and to method steps for operating a computer in processing electrical or other (e.g. mechanical, chemical) physical signals to generate other desired physical signals.
DETAILED DESCRIPTION OF THE INVENTION
Figure 1 illustrates a timing diagram showing the operation of a typical pipelined RISC computer processor while processing single cycle instructions. Single cycle instructions are those that when utilized in the system allow an instruction to be executed during each system clock cycle. The typical steps of executing an instruction are to fetch the instruction, to decode the instruction for its meaning and addresses involved, to execute the instruction, and to write the instruction back to the registers of the processor.
Figure 1 shows a number of clock cycles along the top of the diagram beginning at the clock cycle designated as time 0 and continuing through the clock cycle designated as time 6. Four instructions i0 through i3 are illustrated in the diagram. Each instruction illustrated includes a fetch, a decode, an execute, and a write back stage. The assertion of the addresses of the instructions on the address bus and the assertion of the instructions on the data bus are illustrated below the steps of the instructions.
As may be seen in Figure 1, the steps of these instructions are overlapped so that after an interval of two clock cycles from the first fetch, one instruction is executing during each clock cycle. Thus at clock time 0, instruction i0 is fetched. At clock time 1, instruction i1 is fetched, and instruction i0 is decoded. At clock time 2, instruction i2 is fetched, instruction i1 is decoded, and instruction i0 is executed. At clock time 3, instruction i3 is fetched, instruction i2 is decoded, instruction i1 is executed, and the results of instruction i0 are written back to registers. This execution of an instruction during each clock cycle continues for so long as the processor does not encounter a load or a store instruction. It will be recognized by those skilled in the art that this is not an especially long period, for load and store instructions amount to an average of twenty percent of the instructions executed by a RISC computer.
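The overlap just described can be reproduced mechanically (an illustrative helper, not part of the patent):

```python
STAGES = ("fetch", "decode", "execute", "write back")

def stages_at(clock):
    """For a stall-free pipeline in which instruction i enters the
    fetch stage at clock cycle i, return which instruction occupies
    each stage at the given clock (instructions indexed from 0)."""
    return {clock - offset: stage
            for offset, stage in enumerate(STAGES)
            if clock - offset >= 0}
```

At clock time 3 this yields i3 in fetch, i2 in decode, i1 in execute, and i0 in write back, exactly the Figure 1 schedule.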
Figure 2 illustrates a timing diagram showing the operation of a typical pipelined RISC computer processor while processing instructions which in addition to single cycle instructions include a load instruction. As in Figure 1, a number of system clock cycles are illustrated along the top of the diagram commencing with cycle time 0 and continuing through clock cycle time 7. Four instructions i0 through i3 are illustrated along with a single data fetch required by the load instruction which is for this description considered to be instruction i0. As with Figure 1, the assertion of the addresses of the instructions and the load address on the address bus and the assertion of the instructions and data on the data bus are illustrated below the steps of the instructions.
The decoding of the instruction i0 at time 1 requires that the processor fetch data from memory during clock cycle time 3. The address from which the data is to be loaded is provided on the address bus during cycle time 2, and the data to be loaded is placed on the data bus during cycle time 3. Since both data and an instruction cannot be on the data bus at the same time, the system is required to stall the pipeline for one cycle while the data is placed on the data bus in the data fetch step shown as DF in Figure 2. Once the data has been fetched and placed in the pipeline, the next instruction may be fetched. However, the execution of instructions has been delayed by one clock cycle so that instruction i3 is not complete until cycle time 7.
The reason for this delay is illustrated in the block diagram of Figure 6. The computer illustrated therein is a RISC computer 10 having a central processing unit 12 and external memory 14 joined to the central processing unit 12 by an address bus 15 and a data bus 17 from a cache memory 13. The central processing unit 12 includes an arithmetic and logic unit 19, a register file 20, an instruction decode/fetch instruction data unit 22, and a bus interface 23.
In operation, addresses are transferred between memory and the central processing unit 12 on the address bus 15 while both data and instructions are transferred between the unit 12 and the memory 14 on the data bus 17.
Instructions appear on the data bus 17 and are transferred to the instruction decode unit 22 while data appearing on the data bus 17 are transferred by a multiplexor 25 to the register file 20. Since instructions must use the same bus 17 as does the data, when an instruction requires the access of data from the memory 14, as do load and store instructions, only one of these operations can use the bus at once. Thus, it is apparent that the delay of the processing in the pipeline is due to the structure of the processor which provides only a single bus for both data and instructions.
Figure 3 illustrates a timing diagram showing the operation of a typical pipelined RISC computer processor while processing instructions which, in addition to single cycle instructions, include a store instruction. The computer may, for example, be one of the SPARC based computers manufactured by Sun Microsystems, Mountain View, California. Such computers normally require two clock cycles to transfer data during a store operation. It is not, however, necessary either to the understanding of this invention or to its implementation that the system require two bus cycles for a store operation. As in Figures 1 and 2, a number of system clock cycles are illustrated along the top of the diagram commencing with clock cycle time 0 and continuing through clock cycle time 8. Four instructions i0 through i3 are illustrated along with a pair of data fetch cycles required by the store instruction, which is for this description considered to be instruction i0. The assertion of the addresses of the instructions and the store addresses on the address bus and the assertion of the instructions and data on the data bus are illustrated below the steps of the instructions.
The decoding of the instruction i0 at time 1 provides the store addresses which require that the processor transfer data to memory during two different cycles. Since both data and instructions cannot be on the data bus at the same time, the system is required to stall the pipeline for two clock cycles while the data is placed on the data bus in the data fetch steps shown as DF in Figure 3. Once the data has been transferred, the next instruction may be fetched. However, the execution of instructions has been delayed by two clock cycles so that instruction i3 is not complete until clock cycle time 8.
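The cycle counts quoted for Figures 1 through 3 follow from a simple rule: each load delays the next fetch by one cycle and each two-cycle store by two. The following is a hypothetical sketch, not from the patent, that reproduces those counts under that assumption.

```python
# Model of pipeline slip on a single shared data bus: each data-access
# instruction delays the fetch of its successor by its number of data cycles.
def completion_cycle(kinds, depth=4):
    """Cycle in which the last instruction's write-back stage completes,
    for a list of instruction kinds ("alu", "load", "store")."""
    stall = {"load": 1, "store": 2}  # extra bus cycles assumed per kind
    fetch = 0
    last_done = 0
    for kind in kinds:
        last_done = fetch + depth - 1          # W stage of this instruction
        fetch += 1 + stall.get(kind, 0)        # successor's fetch is delayed
    return last_done

# Four single-cycle instructions: last completes at cycle 6 (Figure 1).
assert completion_cycle(["alu"] * 4) == 6
# i0 is a load: the pipeline slips one cycle, finishing at cycle 7 (Figure 2).
assert completion_cycle(["load", "alu", "alu", "alu"]) == 7
# i0 is a two-cycle store: two stall cycles, finishing at cycle 8 (Figure 3).
assert completion_cycle(["store", "alu", "alu", "alu"]) == 8
```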
As pointed out above, the two cycle minimum time has been shortened in the prior art by providing an on-chip general purpose instruction cache so that an internal source of instructions is available and both data and instructions need not be on the bus at the same time. However, such an instruction cache usually requires a good deal of chip space and may not be feasible without radical overhaul of the architecture of the chip, a procedure which may be very expensive. The time has also been shortened by providing both an instruction and a data bus for the system so that instructions and data may be transferred in parallel. This also requires a good deal of chip architecture revision and significant expense.
The present invention overcomes this problem of the prior art in a different manner. The present invention provides a very small on-chip restricted-use cache that may be termed a stall cache for the purposes of this disclosure. This cache may have only a very few lines; for example, in a preferred embodiment the stall cache may be a fully-associative cache having a line size of one instruction so that no unnecessary instructions need be stored in the cache. A cache of this type having eight lines would require a very small amount of die space on the chip. Consequently, it might be placed on presently designed chips without requiring any substantial architecture modification. The cache might also be implemented in other forms having a larger line size and therefore storing more instructions than just those delayed by load and store operations. An N-way set associative cache might also be utilized.
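As an illustration only (the patent specifies neither code nor a replacement policy), a fully-associative stall cache with a line size of one instruction might be modeled as follows; the FIFO eviction and the eight-line default are assumptions consistent with the preferred embodiment described above.

```python
# Sketch of a tiny fully-associative cache: each line holds one instruction,
# tagged by its full address, so only stalled instructions occupy lines.
class StallCache:
    def __init__(self, n_lines=8):
        self.n_lines = n_lines
        self.lines = {}   # tag (instruction address) -> instruction word
        self.order = []   # insertion order, used for FIFO replacement

    def lookup(self, address):
        """Return the cached instruction, or None on a miss."""
        return self.lines.get(address)

    def fill(self, address, instruction):
        if address in self.lines:
            return
        if len(self.lines) == self.n_lines:   # full: evict the oldest line
            victim = self.order.pop(0)
            del self.lines[victim]
        self.lines[address] = instruction
        self.order.append(address)

cache = StallCache()
cache.fill(0x104, "add")          # instruction stalled by a load is captured
assert cache.lookup(0x104) == "add"   # hit on the next pass: no bus fetch
assert cache.lookup(0x108) is None    # never stalled, so never cached
```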
The cache of this invention is somewhat like a target-of-a-branch cache which is also a restricted type of cache. However, a target-of-a-branch cache is utilized to hold instructions that are the targets of branches while the present invention utilizes a restricted cache to hold instructions involved in bus contentions.
The cache is arranged to hold instructions just as in a full sized instruction cache except that the instructions held are only those instructions that would have been accessed on the data bus during a period in which a load or a store operation (or a similar operation in any pipelined system whether RISC or CISC having instructions which cause a stall due to bus contention) occupies the pipeline so that the instruction cannot be accessed in the normal one instruction per clock sequence.
The processor of the present invention is arranged so that the instruction may be accessed in the on-chip stall cache at the same instant that the data is being fetched on the data bus, essentially in parallel. Consequently, when the information is in the stall cache, no time is lost due to the load and store instruction delays. It has been found that significant speed increases are attained simply by using such a cache, because most programs are filled with a series of small looping processes which miss on a first loop but fill the cache, so that on successive loops no delay is incurred in load and store operations and cache speeds are attained.
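The looping behavior described above can be sketched as follows. This is an illustrative model, not part of the patent; it assumes one stall cycle per load on a stall-cache miss and none on a hit.

```python
# Model of a small loop containing loads: only the first pass through the
# loop pays the stall; afterwards the stalled instructions come from the
# stall cache while the data uses the external bus.
def loop_stall_cycles(iterations, loads_per_pass=1):
    cached = set()   # which load sites have their successor in the cache
    stalls = 0
    for _ in range(iterations):
        for site in range(loads_per_pass):
            if site not in cached:   # first pass: miss, pay the stall, fill
                stalls += 1
                cached.add(site)
    return stalls

# 100 iterations of a loop with one load: only the first pass stalls.
assert loop_stall_cycles(100) == 1
# Without a stall cache every pass would stall: 100 cycles instead of 1.
```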
Figure 7 is a block diagram illustrating a typical RISC computer system 30 designed in accordance with the present invention. The system 30 includes a central processing unit 32, an external cache 33, and memory 34. The three units are associated by a data bus 36 and an address bus 37. The central processing unit 32 includes an arithmetic and logic unit 40, a register file 41, an instruction decode/fetch instruction data unit 42, a bus interface 43, and a multiplexor 44 just as does the prior art processor illustrated in Figure 6.
In addition, the unit 32 includes a stall cache 46 arranged to receive and store those instructions which are delayed due to a fetch of data during a load or store operation. As may be seen in Figure 7, the stall cache 46 is connected to the instruction decode/fetch instruction data unit 42 by the address bus and by an internal instruction bus 47 which provides a path for the transfer of instructions from the cache 46 during the period in which data is being transferred over the external data bus 36. Thus data may be provided to the register file by the data bus 36 via the multiplexor 44 while an instruction is accessed from the stall cache 46.
The manner in which this may be accomplished is relatively simple. When a load or a store instruction is encountered, a bit may be set to indicate that the instruction that follows the data access is to be stored in the stall cache 46. For example, in a particular SPARC-based RISC system, the instruction that follows the load instruction is the instruction fetched three cycles later, immediately after the data fetch for the load instruction. The instruction which has been stalled or delayed by the necessity to access the data is placed in the stall cache 46. For a store instruction, on the other hand, this is the instruction accessed four cycles later, immediately after the two data fetch cycles have been completed. Thereafter, at the next access of the stall cache 46, the instruction will be present so that an external fetch need not occur; and the instruction may be accessed over the internal instruction bus simultaneously with the access of the data on the external data bus 36. Note that in the example shown, in which a store takes three cycles, two instructions (i3, i4) will be stored in the stall cache. In general, the stall cache will contain the instructions that would have been fetched during a bus stall. There may be one, two, or more such instructions for each instruction that caused a stall.
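The fill rule just described — capture the one instruction delayed by a load and the two delayed by a two-cycle store — can be sketched as below. This is a hypothetical model; the offsets follow the SPARC example in the text, and the `kind` labels and address values are assumptions for illustration.

```python
# Given an instruction stream in fetch order, determine which instructions
# a stall cache should capture: those whose fetch was displaced by the data
# cycles of a preceding load (1 cycle) or store (2 cycles).
def instructions_to_cache(stream):
    """stream: list of (kind, address) pairs in program order.
    Returns the addresses whose instructions the stall cache captures."""
    capture = []
    for i, (kind, _addr) in enumerate(stream):
        delayed = {"load": 1, "store": 2}.get(kind, 0)
        for k in range(1, delayed + 1):        # the next `delayed` fetches
            if i + k < len(stream):
                capture.append(stream[i + k][1])
    return capture

stream = [("load", 0x100), ("alu", 0x104), ("alu", 0x108),
          ("store", 0x10C), ("alu", 0x110), ("alu", 0x114)]
# The instruction after the load, and the two after the store, are cached.
assert instructions_to_cache(stream) == [0x104, 0x110, 0x114]
```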
Another method of accomplishing the load of the instruction to the stall cache 46 is to test for the appearance of a data fetch instruction from the unit 42 and load the next instruction to appear into the stall cache 46. Figure 4 illustrates a timing diagram showing the operation of a pipelined RISC computer processor designed in accordance with the invention while processing instructions which, in addition to single cycle instructions, include a load instruction. As in the other timing diagrams, a number of system clock cycles are illustrated along the top of the diagram commencing with clock cycle time 0 and continuing through clock cycle time 6. Four instructions i0 through i3 are illustrated. The assertion of the addresses of the instructions and the load address on the address bus and the assertion of the instructions and data on the data bus are illustrated below the steps of the instructions.
Although a single data fetch is required by the load instruction, which is for this description considered to be instruction i0, no additional processor time is required. As may be seen, the load address and the address of the instruction are placed on the external address bus and the internal address bus simultaneously. Then the data and the instruction are fetched simultaneously using the external data bus 36 and the internal instruction bus. Consequently, no delay of the pipeline due to a stall condition is incurred.
Figure 5 is a timing diagram illustrating the operation of the unit 32 in handling a store operation. Again it may be seen that no delay is incurred due to a stall condition, and the pipeline continues to execute instructions once every clock cycle.
Although two data fetch cycles are required by the store instruction, which is for this description considered to be instruction i0, no additional processor time is required. As may be seen, the store addresses and the addresses of the instructions are placed on the external address bus and the internal address bus simultaneously. Then the data is transferred and the instructions are fetched simultaneously using the external data bus 36 and the internal instruction bus. Consequently, no delay of the pipeline due to a stall condition is incurred.
It has been found that the significant delay in such machines is caused not by the speed of operation of an external cache, but by the delays caused by the load and store operations. Consequently, the stall cache of this invention provides a significant speed increase over the prior art arrangements without significant architecture changes.
Although the present invention has been described in terms of a preferred embodiment, it will be appreciated that various modifications and alterations might be made by those skilled in the art without departing from the spirit and scope of the invention. The invention should therefore be measured in terms of the claims which follow.

Claims

WHAT IS CLAIMED IS:
1. In a computer processor comprising a central processing unit, a main memory, and a single external bus for transferring both instructions and data, the central processing unit comprising an on-chip cache for storing only those instructions which are not accessed because of a data access on the data bus occurring during a load or a store instruction, and an internal instruction bus for accessing the on-chip cache whereby an instruction is available to the processor upon the next data access during a load or a store operation without the delay of a bus access.
2. A computer system as claimed in Claim 1 where the number of cycles required to load or store data is one or more cycles.
3. A computer system comprising a processor capable of providing pipelined processing of instructions, certain of such instructions requiring a time longer for execution than the usual time required for execution of an instruction, at least one unit associated with the processor, and a bus over which information is transferred between the processor and the unit in response to the instructions processed by the processor, the processor comprising a cache for storing only those instructions which are not accessed because of a stall of the pipeline caused by a bus contention, the processor further comprising an internal instruction bus for accessing the cache whereby an instruction stored therein is available to the processor upon the next access during an instruction previously causing the bus contention.
4. A computer system as claimed in Claim 2 in which the processor controls input/output operations.
5. A computer system as claimed in Claim 1 in which the processor is a central processing unit, and where the number of cycles required to load or store data is one or more cycles.
PCT/US1991/001292 1990-03-15 1991-02-28 Apparatus and method for providing a stall cache WO1991014225A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
DE69130233T DE69130233T2 (en) 1990-03-15 1991-02-28 METHOD AND DEVICE FOR INSERTING A LOCKING CACHE
KR1019920702194A KR100210205B1 (en) 1990-03-15 1991-02-28 Apparatus and method for providing a stall cache
EP91906378A EP0592404B1 (en) 1990-03-15 1991-02-28 Apparatus and method for providing a stall cache
JP3506054A JP2879607B2 (en) 1990-03-15 1991-02-28 Apparatus and method for providing stall cache

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US49389890A 1990-03-15 1990-03-15
US493,898 1990-03-15

Publications (1)

Publication Number Publication Date
WO1991014225A1 true WO1991014225A1 (en) 1991-09-19

Family

ID=23962168

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1991/001292 WO1991014225A1 (en) 1990-03-15 1991-02-28 Apparatus and method for providing a stall cache

Country Status (7)

Country Link
US (1) US5404486A (en)
EP (1) EP0592404B1 (en)
JP (1) JP2879607B2 (en)
KR (1) KR100210205B1 (en)
AU (1) AU7486591A (en)
DE (1) DE69130233T2 (en)
WO (1) WO1991014225A1 (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0601715A1 (en) * 1992-12-11 1994-06-15 National Semiconductor Corporation Bus of CPU core optimized for accessing on-chip memory devices
JP3304577B2 (en) * 1993-12-24 2002-07-22 三菱電機株式会社 Semiconductor memory device and operation method thereof
US5535358A (en) * 1993-12-27 1996-07-09 Matsushita Electric Industrial Co., Ltd. Cache memory control circuit and method for controlling reading and writing requests
US5577230A (en) * 1994-08-10 1996-11-19 At&T Corp. Apparatus and method for computer processing using an enhanced Harvard architecture utilizing dual memory buses and the arbitration for data/instruction fetch
GB2293670A (en) * 1994-08-31 1996-04-03 Hewlett Packard Co Instruction cache
US5802564A (en) * 1996-07-08 1998-09-01 International Business Machines Corp. Method and apparatus for increasing processor performance
US5652774A (en) * 1996-07-08 1997-07-29 International Business Machines Corporation Method and apparatus for decreasing the cycle times of a data processing system
US5793944A (en) * 1996-09-13 1998-08-11 International Business Machines Corporation System for restoring register data in a pipelined data processing system using register file save/restore mechanism
US5875346A (en) * 1996-09-13 1999-02-23 International Business Machines Corporation System for restoring register data in a pipelined data processing system using latch feedback assemblies
US5809528A (en) * 1996-12-24 1998-09-15 International Business Machines Corporation Method and circuit for a least recently used replacement mechanism and invalidated address handling in a fully associative many-way cache memory
US6549985B1 (en) * 2000-03-30 2003-04-15 I P - First, Llc Method and apparatus for resolving additional load misses and page table walks under orthogonal stalls in a single pipeline processor
US8725991B2 (en) * 2007-09-12 2014-05-13 Qualcomm Incorporated Register file system and method for pipelined processing
JP5817603B2 (en) * 2012-03-15 2015-11-18 富士通株式会社 Data creation device, data creation method, and data creation program

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4811215A (en) * 1986-12-12 1989-03-07 Intergraph Corporation Instruction execution accelerator for a pipelined digital machine with virtual memory
US4872111A (en) * 1986-08-27 1989-10-03 Amdahl Corporation Monolithic semi-custom IC having standard LSI sections and coupling gate array sections
US4888689A (en) * 1986-10-17 1989-12-19 Amdahl Corporation Apparatus and method for improving cache access throughput in pipelined processors
US4912633A (en) * 1988-10-24 1990-03-27 Ncr Corporation Hierarchical multiple bus computer architecture
US4920477A (en) * 1987-04-20 1990-04-24 Multiflow Computer, Inc. Virtual address table look aside buffer miss recovery method and apparatus

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4437149A (en) * 1980-11-17 1984-03-13 International Business Machines Corporation Cache memory architecture with decoding
US4947316A (en) * 1983-12-29 1990-08-07 International Business Machines Corporation Internal bus architecture employing a simplified rapidly executable instruction set
US4734852A (en) * 1985-08-30 1988-03-29 Advanced Micro Devices, Inc. Mechanism for performing data references to storage in parallel with instruction execution on a reduced instruction-set processor
EP0227319A3 (en) * 1985-12-26 1989-11-02 Analog Devices, Inc. Instruction cache memory
US4722050A (en) * 1986-03-27 1988-01-26 Hewlett-Packard Company Method and apparatus for facilitating instruction processing of a digital computer
US4811208A (en) * 1986-05-16 1989-03-07 Intel Corporation Stack frame cache on a microprocessor chip
US4933837A (en) * 1986-12-01 1990-06-12 Advanced Micro Devices, Inc. Methods and apparatus for optimizing instruction processing in computer systems employing a combination of instruction cache and high speed consecutive transfer memories
US4851990A (en) * 1987-02-09 1989-07-25 Advanced Micro Devices, Inc. High performance processor interface between a single chip processor and off chip memory means having a dedicated and shared bus structure
CA1327080C (en) * 1987-05-26 1994-02-15 Yoshiko Yamaguchi Reduced instruction set computer (risc) type microprocessor
US4894772A (en) * 1987-07-31 1990-01-16 Prime Computer, Inc. Method and apparatus for qualifying branch cache entries
US5136696A (en) * 1988-06-27 1992-08-04 Prime Computer, Inc. High-performance pipelined central processor for predicting the occurrence of executing single-cycle instructions and multicycle instructions
US5006980A (en) * 1988-07-20 1991-04-09 Digital Equipment Corporation Pipelined digital CPU with deadlock resolution
US5050068A (en) * 1988-10-03 1991-09-17 Duke University Method and apparatus for using extracted program flow information to prepare for execution multiple instruction streams
US5027270A (en) * 1988-10-11 1991-06-25 Mips Computer Systems, Inc. Processor controlled interface with instruction streaming

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP0592404A4 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1997014099A1 (en) * 1995-10-12 1997-04-17 Analog Devices, Inc. Digital signal processor with caching of instructions that produce a memory conflict
US5717891A (en) * 1995-10-12 1998-02-10 Analog Devices, Inc. Digital signal processor with caching of instructions that produce a memory conflict
CN101178644B (en) * 2006-11-10 2012-01-25 上海海尔集成电路有限公司 Microprocessor structure based on sophisticated instruction set computer architecture

Also Published As

Publication number Publication date
DE69130233T2 (en) 1999-05-20
KR930700907A (en) 1993-03-16
EP0592404A4 (en) 1993-10-26
KR100210205B1 (en) 1999-07-15
EP0592404A1 (en) 1994-04-20
DE69130233D1 (en) 1998-10-22
JP2879607B2 (en) 1999-04-05
EP0592404B1 (en) 1998-09-16
AU7486591A (en) 1991-10-10
US5404486A (en) 1995-04-04
JPH05506323A (en) 1993-09-16

Similar Documents

Publication Publication Date Title
US5235686A (en) Computer system having mixed macrocode and microcode
US5127091A (en) System for reducing delay in instruction execution by executing branch instructions in separate processor while dispatching subsequent instructions to primary processor
AU628163B2 (en) Method and apparatus for detecting and correcting errors in a pipelined computer system
US4648034A (en) Busy signal interface between master and slave processors in a computer system
EP0592404B1 (en) Apparatus and method for providing a stall cache
KR920006275B1 (en) Data processing apparatus
US4305124A (en) Pipelined computer
US4541045A (en) Microprocessor architecture employing efficient operand and instruction addressing
US5446849A (en) Electronic computer which executes squash branching
EP0507210B1 (en) A data processing system for performing square operations with improved speed and a method therefor
US5041968A (en) Reduced instruction set computer (RISC) type microprocessor executing instruction functions indicating data location for arithmetic operations and result location
US5416911A (en) Performance enhancement for load multiple register instruction
US4747045A (en) Information processing apparatus having an instruction prefetch circuit
EP0279953B1 (en) Computer system having mixed macrocode and microcode instruction execution
US4969117A (en) Chaining and hazard apparatus and method
JPS63116236A (en) Information processor
EP0385136B1 (en) Microprocessor cooperating with a coprocessor
US7020769B2 (en) Method and system for processing a loop of instructions
US6044460A (en) System and method for PC-relative address generation in a microprocessor with a pipeline architecture
US5293499A (en) Apparatus for executing a RISC store and RI instruction pair in two clock cycles
JPH04215129A (en) Method and apparatus for executing continuous command
US4935849A (en) Chaining and hazard apparatus and method
EP0448127B1 (en) Microprogram sequence controller
EP1177499A1 (en) Processor and method of executing instructions from several instruction sources
EP0015276B1 (en) A digital pipelined computer

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AU CA JP KR

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IT LU NL SE

WWE Wipo information: entry into national phase

Ref document number: 1991906378

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: CA

WWP Wipo information: published in national office

Ref document number: 1991906378

Country of ref document: EP

WWG Wipo information: grant in national office

Ref document number: 1991906378

Country of ref document: EP