CN1267024A - Command cache for multiple thread processor - Google Patents

Publication number: CN1267024A (other versions: CN1168025C)
Authority: CN (China)
Language: Chinese (zh)
Application number: CN 00101695
Priority claimed from: US 09/266,133 (US 6161166 A)
Application filed by: International Business Machines Corp.
Original assignee: International Business Machines Corp.
Current assignee: Intel Corp.
Inventors: 理查德·威廉·杜英 (Richard William Doing), 罗纳德·尼克·凯拉 (Ronald Nick Kalla), 斯蒂芬·约瑟夫·施文 (Stephen Joseph Schwinn)
Prior art keywords: instruction, array, entries, address
Legal status: Granted; Expired - Fee Related

Abstract

A multithreaded processor includes a level one instruction cache shared by all threads. The I-cache is accessed with an effective address generated by the instruction unit, and the I-cache directory contains the real page numbers of the corresponding cache lines. A separate line fill sequencer exists for each thread. Preferably, the I-cache is N-way set associative, where N is the number of threads, and includes an effective-to-real address table (ERAT) containing pairs of effective and real page numbers. ERAT entries are accessed by hashing the effective address; the ERAT entry is then compared with the effective address of the desired instruction to verify an ERAT hit.

Description

Instruction Cache for a Multithreaded Processor
This application is a continuation-in-part of commonly assigned copending U.S. patent application Ser. No. 08/966,706, filed November 10, 1997, entitled "Effective-to-Real Address Cache Managing Apparatus and Method", which is herein incorporated by reference.
This application is also related to the following commonly assigned copending U.S. patent applications, all of which are herein incorporated by reference:
Ser. No. 08/976,533, filed November 21, 1997, entitled "Accessing Data from a Multiple Entry Fully Associative Cache Buffer in a Multithread Data Processing System".
Ser. No. 08/958,718, filed October 23, 1997, entitled "Altering Thread Priorities in a Multithreaded Processor".
Ser. No. 08/958,716, filed October 23, 1997, entitled "Method and Apparatus for Selecting Thread Switch Events in a Multithreaded Processor".
Ser. No. 08/957,002, filed October 23, 1997, entitled "Thread Switch Control in a Multithreaded Processor System".
Ser. No. 08/956,875, filed October 23, 1997, entitled "Apparatus and Method to Guarantee Forward Progress in a Multithreaded Processor".
Ser. No. 08/956,577, filed October 23, 1997, entitled "Method and Apparatus to Force a Thread Switch in a Multithreaded Processor".
Ser. No. 08/773,572, filed December 27, 1996, entitled "Background Completion of Instruction and Associated Fetch Request in a Multithread Processor".
The present invention relates generally to digital data processing, and more particularly to instruction caches used to provide instructions to a processing unit of a digital computer system.
A modern computer system typically includes a central processing unit (CPU) and the supporting hardware necessary to store, retrieve and transfer information, such as communication buses and memory. It also includes hardware needed to communicate with the outside world, such as input/output controllers or storage controllers, and the devices attached thereto, such as keyboards, monitors, tape drives, disk drives, and communication lines coupled to a network. The CPU is the heart of the system. It executes the instructions which comprise a computer program and directs the operation of the other system components.
From the standpoint of the computer's hardware, most systems operate in fundamentally the same manner. Processors perform a limited set of very simple operations, such as arithmetic, logical comparisons, and movement of data from one location to another, but each operation is performed very quickly. Programs which direct the computer to perform massive numbers of these simple operations give the illusion that the machine is doing something sophisticated. What is perceived by the user as a new or improved capability of a computer system is made possible by performing essentially the same set of very simple operations, but doing it much faster, so that the user perceives the computer as simply working faster. Continuing improvements to computer systems therefore require that these systems be made ever faster.
The overall speed of a computer system (also called the throughput) may be crudely measured as the number of operations performed per unit of time. Conceptually, the simplest of all possible improvements to system speed is to increase the clock speeds of the various components, and particularly the clock speed of the processor. E.g., if everything runs twice as fast but otherwise works in exactly the same manner, the system will perform a given task in half the time. Early computer processors, which were constructed from many discrete components, were susceptible to significant speed improvements by shrinking component size, reducing component number, and eventually packaging the entire processor as an integrated circuit on a single chip. The reduced size made it possible to increase the clock speed of the processor, and accordingly increase system speed.
Despite the enormous improvement in speed obtained from integrated circuitry, the demand for ever faster computer systems has continued. Hardware designers have been able to obtain still further improvements in speed by greater integration (i.e., increasing the number of circuits packed onto a single chip), by further reducing the size of the circuits, and by various other techniques. However, designers can see that physical size reductions cannot continue indefinitely, and there are limits to their ability to continue to increase the clock speed of processors. Attention has therefore been directed to other approaches for further improvements in the overall speed of the computer system.
Without changing the clock speed, it is possible to improve system throughput by using multiple processors. The modest cost of individual processors packaged on integrated circuit chips has made this practical. While the use of multiple processors has certain potential benefits, it introduces additional architectural issues. Without delving into these, it can still be observed that there are many reasons to improve the speed of the individual CPU, whether or not the system uses multiple CPUs or a single CPU. If the CPU clock speed is given, it is possible to further increase the speed of the individual CPU, i.e., the number of operations executed per second, by increasing the average number of operations executed per clock cycle.
In order to boost CPU speed, it is common in high performance processor designs to employ instruction pipelining, as well as one or more levels of cache memory. Pipelined instruction execution allows subsequent instructions to begin execution before previously issued instructions have finished. Cache memories store frequently used and other data nearer the processor, and allow instruction execution to continue, in most cases, without waiting the full access time of main memory.
Pipelines will stall under certain circumstances: an instruction that depends on the result of a previously dispatched instruction which has not yet completed will cause the pipeline to stall. For instance, instructions dependent on a load/store instruction for which the necessary data is not in the cache (i.e., a cache miss) cannot be executed until the data becomes available in the cache. Maintaining the necessary data in the cache, and thereby maintaining a high hit ratio (the ratio of requests for data to the number of times the data was readily available in the cache), is not trivial, especially for computations involving large data structures. A cache miss can cause the pipeline to stall for several cycles, and the total memory latency will be severe if the data is unavailable most of the time. Although the memory devices used for main memory are becoming faster, the speed gap between such memory chips and high-end processors is becoming increasingly larger. Accordingly, a significant amount of execution time in current high-end processor designs is spent resolving cache misses.
It can be seen that reducing the time the processor spends waiting for some event, such as re-filling a pipeline or retrieving data from memory, will increase the average number of operations per clock cycle. One architectural innovation directed to this problem is called "multithreading". This technique involves breaking the workload into multiple independently executable sequences of instructions, called threads. At any instant in time, the CPU maintains the state of multiple threads. As a result, it is relatively simple and fast to switch threads.
The term "multithreading" as defined in the computer architecture community is not the same as the software use of the term, in which one task is subdivided into multiple related threads. In the architecture definition, the threads may be independent. Therefore "hardware multithreading" is often used to distinguish the two uses of the term. As used herein, "multithreading" will refer to hardware multithreading.
There are two basic forms of multithreading. In the more traditional form, sometimes called "fine-grained multithreading", the processor executes N threads concurrently by interleaving execution on a cycle-by-cycle basis. This creates a gap between the execution of each instruction within a single thread, which removes the need for the processor to wait for certain short-term latency events, such as re-filling an instruction pipeline. In the second form of multithreading, sometimes called "coarse-grained multithreading", multiple instructions in a single thread are executed sequentially until the processor encounters some longer-term latency event, such as a cache miss.
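As a rough illustration (not part of the patent), the coarse-grained policy described above can be sketched in a few lines: a thread runs until its next instruction fetch would miss, at which point the processor switches to the other ready thread while the miss is serviced in the background. The thread lists, addresses, and single-cycle timing model are all simplifying assumptions.

```python
def run_coarse_grained(threads, miss_addrs, max_steps=100):
    """Toy model of coarse-grained multithreading.

    threads: dict thread_id -> list of instruction addresses to execute.
    miss_addrs: addresses whose first fetch is a long-latency cache miss.
    Returns the order of executed (thread_id, addr) pairs."""
    pcs = {tid: 0 for tid in threads}
    serviced = set()   # misses assumed resolved while another thread ran
    order = []
    active = next(iter(threads))
    for _ in range(max_steps):
        runnable = [t for t in threads if pcs[t] < len(threads[t])]
        if not runnable:
            break
        if active not in runnable:
            active = runnable[0]
        addr = threads[active][pcs[active]]
        if addr in miss_addrs and addr not in serviced and len(runnable) > 1:
            serviced.add(addr)   # line fill proceeds in the background
            active = [t for t in runnable if t != active][0]  # switch threads
            continue
        order.append((active, addr))
        pcs[active] += 1
    return order
```

With one miss in thread T0's stream, T0 runs until the miss, T1 runs to completion while the fill is serviced, and T0 then resumes.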
Typically, multithreading involves replicating the processor registers for each thread in order to maintain the state of multiple threads. For instance, for a processor implementing the architecture sold under the trade name PowerPC™ to perform multithreading, the processor must maintain N states to run N threads. Accordingly, the following are replicated N times: general purpose registers, floating point registers, condition registers, floating point status and control register, count register, link register, exception register, save/restore registers, and special purpose registers. Additionally, special buffers such as a segment lookaside buffer can be replicated, or each entry can be tagged with a thread number and, if not, must be flushed on every thread switch. Also, some branch prediction mechanisms, such as the correlation register and the return stack, should be replicated.
Generally, larger hardware structures such as level one instruction caches, level one data caches, functional units or execution units are not replicated. All other things being equal, replicating larger hardware structures might yield some performance improvement, but any such approach requires a trade-off between the benefit gained and the additional hardware required. Cache memories occupy considerable area on a processor chip, area which could otherwise be put to other uses. The size, number and function of caches must therefore be chosen carefully.
For high performance designs, it is common to place a level one instruction cache (L1 I-cache) on the processor chip. The L1 I-cache is intended to hold instructions considered likely to be executed in the near future.
The use of an L1 I-cache raises additional concerns in a multithreaded processor. The instruction cache should support fast switching of threads, without undue contention among threads. One way to avoid contention is to give each thread its own separate instruction cache, but this consumes valuable hardware and makes the cache available to any single thread too small. It is preferable that a single L1 I-cache be shared by all threads without undue contention between them. It is also desirable that the cache access mechanism avoid the use of slow address translation mechanisms.
The design of the L1 I-cache is critical to the operation of a high speed processor. If the L1 I-cache miss rate is high, or the access time is too slow, or excessive contention exists among different threads, or cache coherency is difficult to maintain, the processor may spend undue time waiting for the next instruction to execute. Continued improvement of processors requires that the L1 I-cache effectively address these concerns, particularly in a multithreaded environment.
It is therefore an object of the present invention to provide an improved processor apparatus.
Another object of this invention is to provide an improved instruction caching apparatus for a multithreaded processor.
Another object of this invention is to reduce contention among threads of a multithreaded processor when accessing an instruction cache.
A multithreaded processor includes a level one instruction cache shared by all threads. The L1 I-cache comprises a directory array and an array of cached instructions, both of which are shared by all threads and accessed via a hash function of the effective address of the desired instruction. Each directory array entry contains at least a portion of the real address of the corresponding cache line in the instruction array, from which the full real address of an instruction in the cache may be derived. A separate line fill sequencer exists for each thread, making it possible to satisfy a cache line fill request for one thread while another thread accesses cache entries, or possibly to pre-fetch several lines for an executing thread.
In the preferred embodiment, the arrays are divided into multiple sets, each set having one entry associated with a given hash function value (an N-way set associative cache). In this embodiment, the processor maintains state information for two separate threads, and the instruction cache arrays are divided into two sets, although a different number of threads and a different cache associativity could be used. Because each thread can independently access different cached instructions which have the same hash value but belong to different sets, contention among the threads is reduced.
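A minimal sketch of this organization, under assumed sizes (2 ways, 8 sets, 64-byte lines) and a placeholder hash, shows how two threads whose instructions collide on the same hash value can occupy different ways rather than evicting one another:

```python
NUM_WAYS = 2   # one way per thread in the preferred embodiment
NUM_SETS = 8   # rows selected by the hash (real designs use many more)

def hash_index(effective_addr):
    # Simplified index hash: low-order cache-line address bits,
    # assuming 64-byte lines.
    return (effective_addr >> 6) % NUM_SETS

class IDirectory:
    """Toy directory array: rows[set][way] holds a real page number."""
    def __init__(self):
        self.rows = [[None] * NUM_WAYS for _ in range(NUM_SETS)]

    def is_hit(self, effective_addr, real_page):
        # A hit in either way of the indexed set counts as a cache hit.
        return real_page in self.rows[hash_index(effective_addr)]

    def fill(self, effective_addr, real_page, way):
        self.rows[hash_index(effective_addr)][way] = real_page
```

Addresses 0x1000 and 0x3000 index the same set here, yet both can reside in the directory at once, one per way, which is what lets two threads share the cache without thrashing each other.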
The instruction cache preferably includes an effective-to-real address table (ERAT), which acts as a cache of the address translation mechanism for main memory. The ERAT contains pairs of effective address portions and corresponding real address portions. The effective address portion of an ERAT entry is compared with the effective address of the desired instruction to verify an ERAT hit. The corresponding real address portion is compared with the real address portion in the directory array to verify a cache hit.
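The ERAT lookup described here can be sketched as follows; the table size, hash (a simple modulo of the effective page number), and 4K page size are assumptions for illustration, not the patent's actual design:

```python
PAGE_SHIFT = 12    # 4K pages: the byte index is the low 12 bits
ERAT_SLOTS = 16    # assumed table size

class ERAT:
    """Toy effective-to-real address table: slots of (epn, rpn) pairs."""
    def __init__(self):
        self.slots = [None] * ERAT_SLOTS

    def install(self, effective_addr, real_page):
        epn = effective_addr >> PAGE_SHIFT   # effective page number
        self.slots[epn % ERAT_SLOTS] = (epn, real_page)

    def lookup(self, effective_addr):
        """Return the real page number on an ERAT hit, else None."""
        epn = effective_addr >> PAGE_SHIFT
        entry = self.slots[epn % ERAT_SLOTS]
        if entry is not None and entry[0] == epn:
            # Stored effective page number matches: verified ERAT hit.
            return entry[1]
        return None
```

Any address within the installed page hits and yields the paired real page number; an address hashing to an empty or mismatched slot misses, forcing a slower translation.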
The line fill sequencer preferably operates in response to a cache miss for which an ERAT entry exists for the requested effective address (an ERAT hit). In this case, the full real address of the desired instruction can be constructed from the effective address and the information in the ERAT, without resort to the slower address translation mechanisms for main memory. Using the constructed real address, the line fill sequencer accesses memory directly.
Because each thread has its own separate line fill sequencer, each thread can independently satisfy cache fill requests without waiting for another. Additionally, because the instruction cache directory contains the real page number corresponding to an entry, cache coherency is simplified. Furthermore, associating effective and real page numbers in the ERAT avoids, in many cases, the need to access the slower address translation mechanisms for memory. Finally, the N-way set associative nature of the cache makes it possible for all threads to use a common cache without undue thread contention.
Other objects, features and characteristics of the present invention, as well as the methods of operation and functions of the related elements of structure and the combination of parts, will become more apparent upon consideration of the following detailed description of the preferred embodiment and the accompanying drawings, all of which form a part of this specification, wherein like reference numerals designate corresponding parts in the various figures.
FIG. 1A is a high-level block diagram of the major hardware components of a single-CPU computer system according to the preferred embodiment of the invention described herein.
FIG. 1B is a high-level block diagram of the major hardware components of a multiple-CPU computer system according to the preferred embodiment.
FIG. 2 is a high-level diagram of the central processing unit of the computer system according to the preferred embodiment.
FIG. 3 shows the major components of the L1 instruction cache according to the preferred embodiment.
FIG. 4 shows in greater detail the effective-to-real address table and associated control structures, according to the preferred embodiment.
FIG. 5 shows in greater detail the L1 instruction cache directory array and associated control structures, according to the preferred embodiment.
FIG. 6 shows in greater detail the L1 instruction cache instruction array and associated control structures, according to the preferred embodiment.
FIG. 7 shows the major control logic which generates a cache line fill, according to the preferred embodiment.
FIG. 8 is a logical representation of address translation according to the preferred embodiment.
The major hardware components of a single-CPU computer system for utilizing the instruction cache architecture according to the preferred embodiment of the present invention are shown in FIG. 1A. CPU 101, for processing instructions, contains separate internal level one instruction cache (L1 I-cache) 106 and level one data cache (L1 D-cache) 107. The L1 I-cache stores instructions for execution by CPU 101. The L1 D-cache stores data (other than instructions) to be processed by CPU 101. CPU 101 is coupled to a level two cache (L2 cache) 108, which can be used to hold both instructions and data. Memory bus 109 transfers data between L2 cache 108 and main memory 102, or between CPU 101 and main memory 102. CPU 101, L2 cache 108 and main memory 102 also communicate via bus interface 105 with system bus 110. Various I/O processing units (IOPs) 111-115 attach to system bus 110 and support communication with a variety of storage and I/O devices, such as direct access storage devices (DASD), tape drives, workstations, printers, and remote communication lines for communicating with remote devices or other computer systems.
It should be understood that FIG. 1A is intended to depict representative components of system 100 at a high level, and that the number and types of such components may vary. In particular, system 100 may contain multiple CPUs. Such a multiple-CPU system is depicted in FIG. 1B. FIG. 1B shows a system having four CPUs 101A, 101B, 101C, 101D, each CPU containing a respective L1 I-cache 106A, 106B, 106C, 106D and a respective L1 D-cache 107A, 107B, 107C, 107D. A separate L2 cache 108A, 108B, 108C, 108D is associated with each CPU.
In the preferred embodiment, each CPU is capable of maintaining the state of two threads, and switches execution between threads on certain latency events. I.e., the CPU executes a single thread (the active thread) until some latency event forcing it to wait is encountered (a form of coarse-grained multithreading). However, it should be understood that the present invention could be practiced with a different number of thread states in each CPU, and that it would also be possible to interleave execution of instructions from each thread on a cycle-by-cycle basis (fine-grained multithreading), or to switch threads on some other basis.
FIG. 2 is a high-level diagram of the major components of CPU 101, showing CPU 101 in greater detail than FIGS. 1A and 1B, according to the preferred embodiment. In this embodiment, the components shown in FIG. 2 are packaged on a single semiconductor chip. CPU 101 includes instruction unit portion 201, execution unit portion 211 and storage control portion 221. In general, instruction unit 201 obtains instructions from L1 I-cache 106, decodes instructions to determine the operations to perform, and resolves branch conditions to control program flow. Execution unit 211 performs arithmetic and logical operations on data in registers, and loads or stores data. Storage control unit 221 accesses data in the L1 data cache, or interfaces with memory external to the CPU where instructions or data must be fetched or stored.
Instruction unit 201 comprises branch unit 202, buffers 203, 204, 205, and decode/dispatch unit 206. Instructions from L1 I-cache 106 are loaded into one of the three buffers from L1 I-cache instruction bus 232. Sequential buffer 203 stores 16 instructions in the current execution sequence. Branch buffer 205 stores 8 instructions from a branch destination; these 8 instructions are loaded into buffer 205 speculatively, before the branch is evaluated, in the event the branch is taken. Thread switch buffer 204 stores 8 instructions for the inactive thread; in the event of a thread switch from the currently active thread, these instructions will be immediately available. Decode/dispatch unit 206 receives the current instruction to be executed from one of the buffers, and decodes the instruction to determine the operation(s) to perform or the branch conditions. Branch unit 202 controls program flow by evaluating branch conditions, and refills the buffers from L1 I-cache 106 by sending the effective address of a desired instruction on L1 I-cache address bus 231.
Execution unit 211 comprises S-pipe 213, M-pipe 214, R-pipe 215 and a bank of general purpose registers 217. Registers 217 are divided into two sets, one for each thread. R-pipe 215 is a pipelined arithmetic unit for performing a subset of the integer arithmetic and simple integer logic functions. M-pipe 214 is a pipelined arithmetic unit for performing a larger set of arithmetic and logic functions. S-pipe 213 is a pipelined unit for performing load and store operations. Floating point unit 212 and associated floating point registers 216 are used for certain complex floating point operations which typically require multiple cycles. Like general purpose registers 217, floating point registers 216 are divided into two sets, one for each thread.
Storage control unit 221 comprises memory management unit 222, L2 cache directory 223, L2 cache interface 224, L1 data cache 107, and memory bus interface 225. L1 D-cache 107 is an on-chip cache used for data (as opposed to instructions). L2 cache directory 223 is a directory of the contents of L2 cache 108. L2 cache interface 224 handles the transfer of data directly to and from L2 cache 108. Memory bus interface 225 handles the transfer of data across memory bus 109, which may be to main memory 102 or to the L2 cache units associated with other CPUs. Memory management unit 222 is responsible for routing data accesses to the various units. E.g., when S-pipe 213 processes a load command requiring data to be loaded into a register, memory management unit 222 may fetch the data from L1 D-cache 107, L2 cache 108 or main memory 102. Memory management unit 222 determines where to obtain the data. L1 D-cache 107 is directly accessible, as is L2 cache directory 223, enabling unit 222 to determine whether the data is in L1 D-cache 107 or L2 cache 108. If the data is in neither the on-chip L1 D-cache nor L2 cache 108, it is fetched from memory bus 109 using memory interface 225.
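The routing decision made by a memory management unit like unit 222 can be caricatured with dictionaries standing in for the hardware structures; note how an on-chip L2 directory is what lets the unit decide whether the L2 holds the line before resorting to the memory bus. Names and structures here are illustrative assumptions, not the patent's design.

```python
def load(addr, l1_dcache, l2_directory, l2_cache, main_memory):
    """Return (data, source) for a load, trying the nearest level first.

    l1_dcache, l2_cache, main_memory: dict addr -> data.
    l2_directory: set of addresses recorded as resident in the L2."""
    if addr in l1_dcache:
        return l1_dcache[addr], "L1"
    if addr in l2_directory:
        # The directory (on chip) says the line is in the L2 cache.
        return l2_cache[addr], "L2"
    # Neither on-chip cache holds it: go out over the memory bus.
    return main_memory[addr], "memory bus"
```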
While various CPU components have been described and shown at a high level, it should be understood that the CPU of the preferred embodiment contains many other components not shown, which are not essential to an understanding of the present invention. For example, various additional special purpose registers will be required in a typical design, some of which must be replicated for each thread. It should also be understood that the number, type and arrangement of components within CPU 101 could be varied. For example, the number and configuration of buffers and caches may vary; the number and function of execution unit pipelines may vary; registers may be configured in different arrays and sets; dedicated floating point hardware may or may not be present; etc.
Ideally, instruction unit 201 provides a continuous stream of instructions for decoding in decoder 206 and execution by execution unit 211. L1 I-cache 106 must respond to an access request with minimal delay. Where the requested instruction is actually in the L1 I-cache, it must be possible to respond and fill the appropriate buffer without requiring decoder/dispatcher 206 to wait. Where the L1 I-cache cannot respond (i.e., the requested instruction is not in the L1 I-cache), the longer path through memory management unit 222 via cache fill bus 233 must be taken. In this case, the instruction may be obtained from L2 cache 108, from main memory 102, or potentially from disk or other storage. Where system 100 contains multiple processors, it is also possible to obtain the instruction from the L2 cache of another processor. In any of these cases, the delay required to fetch the instruction from a remote location may cause instruction unit 201 to switch threads. I.e., the active thread becomes inactive, the previously inactive thread becomes active, and instruction unit 201 begins processing the instructions of the previously inactive thread which were held in thread switch buffer 204.
FIG. 3 shows the major components of L1 I-cache 106 in greater detail than FIGS. 1A, 1B or 2, according to the preferred embodiment. L1 I-cache 106 includes effective-to-real address table (ERAT) 301, I-cache directory array 302, and I-cache instruction array 303. I-cache instruction array 303 stores the actual instructions which are supplied to instruction unit 201 for execution. I-cache directory array 302 contains a collection of real page numbers, sets of validity bits, and other information used to manage instruction array 303, and in particular to determine whether a desired instruction is in fact in instruction array 303. ERAT 301 contains pairs of effective page numbers and real page numbers, which are used to associate effective addresses with real addresses.
CPU 101 of the preferred embodiment supports multiple levels of address translation, as logically illustrated in FIG. 8. The three basic addressing constructs are effective address 801, virtual address 802, and real address 803. An "effective address" refers to the address generated by instruction unit 201 to locate an instruction; i.e., it is the address from the point of view of the user's executable code. An effective address may be produced in any of various ways known in the art, e.g., by concatenating some high-order address bits in a special purpose register (which changes infrequently, e.g., when execution of a new task is initiated) with low-order address bits from an instruction; by computing an offset from an address in a general purpose register; as an offset from the currently executing instruction; etc. In the preferred embodiment, an effective address comprises 64 bits, numbered 0 to 63 (0 being the highest order bit). A "virtual address" is an operating system construct, used to isolate the address spaces of different users. I.e., if each user may reference the full range of effective addresses, then the effective address spaces of different users must be mapped into a larger virtual address space to avoid conflicts. The virtual address is not a physical entity in the sense that it is stored in registers; it is a logical construction, resulting from the concatenation of 52-bit virtual segment ID 814 with the low-order 28 bits of the effective address, a total of 80 bits. A "real address" refers to the physical location in memory 102 where the instruction is stored. The real address comprises 40 bits, numbered 24 to 63 (24 being the highest order bit).
As shown in FIG. 8, effective address 801 comprises 36-bit effective segment ID 811, 16-bit page number 812, and 12-bit byte index 813, the effective segment ID occupying the highest order bit positions. A virtual address 802 is constructed from an effective address by mapping 36-bit effective segment ID 811 to 52-bit virtual segment ID 814, and concatenating the virtual segment ID with page number 812 and byte index 813. A real address 803 is derived from the virtual address by mapping virtual segment ID 814 and page number 812 to a 28-bit real page number 815, and concatenating the real page number with byte index 813. Because a page of main memory contains 4K (i.e., 2^12) bytes, byte index 813 (the lowest order 12 address bits) specifies an address within a page, and is the same whether the address is effective, virtual or real. The higher order bits specify a page, and are therefore sometimes referred to as an "effective page number" or "real page number", as the case may be.
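Under the bit numbering above (bit 0 most significant in a 64-bit effective address), the field boundaries can be checked with a few shifts and masks; this is only a sketch of the layout, not hardware:

```python
def split_effective_address(ea):
    """Split a 64-bit effective address into its three fields:
    36-bit effective segment ID (bits 0-35), 16-bit page number
    (bits 36-51), and 12-bit byte index (bits 52-63)."""
    esid = ea >> 28               # top 36 bits (shift off page + index)
    page = (ea >> 12) & 0xFFFF    # next 16 bits
    byte_index = ea & 0xFFF       # low 12 bits: offset within the 4K page
    return esid, page, byte_index

def effective_page_number(ea):
    # Everything above the byte index: segment ID concatenated with
    # page number, i.e. the "effective page number".
    return ea >> 12
```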
Computer system 100 includes an address translation mechanism for converting effective addresses generated by CPU 101 into real addresses in memory 102. This mechanism comprises a segment table mechanism 821 for translating effective segment ID 811 into virtual segment ID 814, and a page table mechanism 822 for translating virtual segment ID 814 and page number 812 into real page number 815. Although these mechanisms are shown as single entities in Fig. 8 for illustrative purposes, each is in fact composed of multiple tables or registers at different levels. That is, a complete page table and a complete segment table reside in main memory 102, while various smaller cached portions of this data are held within CPU 101 itself or in the L2 cache. Under certain limited conditions, additional translation mechanisms (not shown) translate directly from effective to real addresses.
In addition to the full address translation shown in Fig. 8, CPU 101 also supports simpler forms of addressing. In particular, the CPU 101 of the preferred embodiment may operate in either a "tags active" mode or a "tags inactive" mode. These modes imply different addressing and are used to support different operating systems. The current operating mode is recorded in the machine state register (a special-purpose register). The full address translation described above is used in tags inactive mode. In tags active mode, the effective address is the same as the virtual address (i.e., effective segment ID 811 maps directly to virtual segment ID 814 without a table lookup, the 16 high-order bits of the virtual segment ID being 0). CPU 101 may also operate in an effective=real addressing mode (explained later).
As can be seen, address translation from an effective to a real address requires multiple levels of table lookup. Moreover, parts of the address translation mechanism are located off the CPU chip and associated with memory 102, and access to that mechanism is much slower than access to an on-chip cache. ERAT 301 may be regarded as a small cache that holds a portion of the information maintained by the address translation mechanism and converts effective addresses directly to real addresses, thereby allowing effective and real addresses to be associated in the L1 instruction cache in most cases without resort to the address translation mechanism.
When instruction unit 201 requests an instruction from instruction cache 106, supplying the effective address of the requested instruction, the instruction cache must rapidly determine whether the requested instruction is in fact in the cache, return the instruction if it is, and initiate action to obtain it from elsewhere (e.g., the L2 cache or main memory) if it is not. In the normal case, where the instruction is in fact in the L1 instruction cache 106, the following actions occur concurrently in the instruction cache, as shown in Fig. 3:
(a) the effective address from instruction unit 201 is used to access an entry in ERAT 301, deriving an effective page number and an associated real page number;
(b) the effective address from instruction unit 201 is used to access an entry in directory array 302, deriving a pair of real page numbers;
(c) the effective address from instruction unit 201 is used to access an entry in instruction array 303, deriving a pair of cache lines containing instructions.
In each of the above cases, the input to any one of ERAT 301, directory array 302, or instruction array 303 is independent of the output of the others, so none of these actions need await the completion of another. The outputs of ERAT 301, directory array 302, and instruction array 303 are then processed as follows:
(a) comparator 304 compares the effective page number from ERAT 301 with the corresponding address bits of the effective address from instruction unit 201; if they match, an ERAT "hit" exists;
(b) comparator 305 compares the real page number from ERAT 301 with each real page number from directory array 302; if either matches and an ERAT hit exists, there is an instruction cache "hit," i.e., the requested instruction is in fact in instruction cache 106, and specifically in instruction array 303;
(c) the result of comparing the real page numbers from ERAT 301 and directory array 302 is used (via select multiplexer 307) to select which of the pair of cache lines output from instruction array 303 contains the desired instruction.
Performing these actions concurrently minimizes delay when the desired instruction is in fact in the instruction cache. Whether or not the desired instruction is in the cache, some data appears at the instruction cache output and is presented to instruction unit 201. A separate instruction cache hit signal tells instruction unit 201 whether this output data actually contains the desired instruction; in the absence of the hit signal, instruction unit 201 ignores the output data. The actions taken by instruction cache 106 in the event of a cache miss are discussed later herein.
Fig. 4 shows ERAT 301 and its associated control structures in greater detail. ERAT 301 is an 82 x 128 array (i.e., 128 entries of 82 bits each). Each ERAT entry contains an effective address portion (bits 0-46), a real address portion (bits 24-51), and several additional bits described later.
ERAT 301 is accessed through a hash function combining bits 45-51 of the effective address (EA) with two control lines: a multithreading control line (MT) indicating whether multithreading is active (in the CPU design of the preferred embodiment, multithreading may be switched off), and an active-thread line (ActT) indicating which of the two threads is active when multithreading is in use. The hash function (HASH) is as follows:
HASH(0:6) = ((EA45 AND ¬MT) OR (ActT AND MT)) ‖ EA46 ‖ (EA38 XOR EA47) ‖ (EA39 XOR EA48) ‖ EA(49:51)
As can be seen, this is a 7-bit function, sufficient to specify any one of the 128 entries in the ERAT. Selection logic 401 selects the appropriate ERAT entry according to the above hash function.
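As a sanity check on the hash width, the function above can be rendered in software. This is my own sketch under two stated assumptions: bit 0 of the EA is the most significant bit (the patent's numbering), and the concatenation order follows the equation as written.

```python
# Software rendering of the 7-bit ERAT hash (illustrative, not RTL).

def ea_bit(ea: int, i: int) -> int:
    """Bit i of a 64-bit effective address, bit 0 = most significant."""
    return (ea >> (63 - i)) & 1

def erat_hash(ea: int, mt: int, act_t: int) -> int:
    """7-bit index selecting one of the 128 ERAT entries."""
    b0 = (ea_bit(ea, 45) & (1 - mt)) | (act_t & mt)  # thread bit when MT on
    b1 = ea_bit(ea, 46)
    b2 = ea_bit(ea, 38) ^ ea_bit(ea, 47)
    b3 = ea_bit(ea, 39) ^ ea_bit(ea, 48)
    b4_6 = (ea_bit(ea, 49) << 2) | (ea_bit(ea, 50) << 1) | ea_bit(ea, 51)
    return (b0 << 6) | (b1 << 5) | (b2 << 4) | (b3 << 3) | b4_6
```

Note how multithreading replaces EA bit 45 with the active-thread bit in the top hash position, so the two threads fall into different halves of the ERAT.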
Comparator 304 compares bits 0-46 of the effective address generated by instruction unit 201 with the effective address portion of the selected ERAT entry. Because bits 47-51 of the effective address from instruction unit 201 were used to form the hash function, it can be shown that a match on bits 0-46 is sufficient to guarantee a match on the full effective page number portion of the address, i.e., bits 0-51. A match of these two address portions means that the real page number in the ERAT entry (RA24:51) is in fact the real page number corresponding to the effective page number (EA0:51) specified by instruction unit 201. For this reason, the effective address portion stored in an ERAT entry is sometimes loosely called the effective page number, although in the preferred embodiment it contains only bits 0-46 of the effective page number.
In some circumstances, CPU 101 may execute in a special addressing mode known as effective=real mode (E=R). When executing in this mode, the 40 low-order bits of the effective address generated by instruction unit 201 (i.e., EA24:63) are identical to the real address (RA24:63). Typically, this mode is reserved for certain low-level operating system functions that execute more efficiently if always stored at the same real address. As shown in Fig. 4, when the E=R control line is active, ERAT 301 is effectively bypassed. That is, when E=R is false, select multiplexer 402 selects RA24:51 from the selected ERAT entry as the real page number (RPN) output; when E=R is true, multiplexer 402 selects EA24:51 from instruction unit 201. Additionally, when E=R is true, an ERAT hit is assumed regardless of the comparison result in comparator 304.
Because the ERAT effectively bypasses the address translation mechanism shown and described in Fig. 8, the ERAT duplicates some of the access control information contained in the conventional address translation mechanism. That is, translation from an effective to a real address normally carries with it verification of access rights, based on information contained in segment table 821, page table 822, or elsewhere. ERAT 301 caches a subset of this information so that references to those address translation mechanisms can be avoided. Further information concerning the operation of the ERAT may be found in U.S. patent application Ser. No. 08/966,706, filed Nov. 10, 1997, entitled "Effective-To-Real Address Cache Managing Apparatus and Method," which is incorporated herein by reference.
Each ERAT entry contains several parity, maintenance, and access control bits. In particular, each ERAT entry includes a cache inhibit bit, a problem state bit, and an access control bit. Additionally, a separate array 403 (1 x 128) contains a single valid bit associated with each ERAT entry. Finally, a pair of tag mode bits is stored in a separate register 404. The valid bit of array 403 records whether the corresponding ERAT entry is valid; various conditions may cause processor logic (not shown) to reset the valid bit, causing a subsequent access to the corresponding ERAT entry to reload the entry. The cache inhibit bit is used to inhibit writing the requested instruction to instruction cache array 303. That is, although a certain range of addresses may have an entry in the ERAT, it may be desirable to avoid caching instructions from that address range in the instruction cache. In this case, each request for an instruction in the range causes the line fill sequencer logic (described later) to obtain the requested instruction, but the instruction is not written to array 303 (nor is directory array 302 updated). The problem state bit records the "problem state" of the thread executing at the moment the ERAT entry was loaded (i.e., supervisor or user). A thread executing in supervisor state generally has greater access rights than one executing in problem state. If an ERAT entry were loaded during one state and the problem state subsequently changed, there is a risk that the currently executing thread should not have access to the addresses within the range of that ERAT entry, so this information must be verified when the ERAT is accessed. The access control bit likewise records access information as of the moment the ERAT entry was loaded, and is checked on access. Tag mode bits 404 record the tag mode of the processor (tags active or tags inactive) when the ERAT entry was loaded; one tag mode bit is associated with each half (64 entries) of the ERAT, selected by bit 0 of the ERAT HASH function. Because the tag mode affects how effective addresses are interpreted, a change of tag mode means that the real page number in an ERAT entry can no longer be considered reliable. Tag mode changes are expected to be infrequent. Accordingly, if a change is detected, all entries in the corresponding half of the ERAT are marked invalid and are eventually reloaded.
ERAT logic 405 generates control signals governing the use of the RPN output of select multiplexer 402 and the maintenance of the ERAT, based on the output of comparator 304, the effective=real mode line, the several bits described above, and CPU machine state register bits (not shown). In particular, logic 405 generates an ERAT Hit signal 410, a protection exception (Protection Exc) signal 411, an ERAT Miss signal 412, and a Cache Inhibit signal 413.
ERAT Hit signal 410 indicates that the RPN output of select multiplexer 402 may be used as the real page number corresponding to the requested effective address. This signal is active either when effective=real mode is in force (E=R, bypassing the ERAT), or when comparator 304 detects a match, provided there is no protection exception and none of certain conditions forcing an ERAT miss exists. Expressed logically:
ERAT_Hit = (E=R) OR (Match_304 AND Valid AND ¬Protection_Exc AND ¬Force_Miss)
where Match_304 is the signal from comparator 304 indicating that EA0:46 from instruction unit 201 matches EA0:46 in the ERAT entry, and Valid is the value of the valid bit from array 403.
Protection exception signal 411 indicates that, although the ERAT entry contains valid data, the currently executing process is not permitted to access the desired instruction. ERAT Miss signal 412 indicates that the requested ERAT entry does not contain the desired real page number, or that the entry cannot be considered reliable; in either case, the ERAT entry must be reloaded. Cache Inhibit signal 413 prevents the requested instruction from being cached in instruction array 303. These signals are derived logically as follows:
Force_Miss = (MSR(Pr) ≠ ERAT(Pr)) OR (MSR(TA) ≠ Tag_404)
Protection_Exc = ¬(E=R) AND ¬Force_Miss AND Match_304 AND Valid AND ERAT(AC) AND (MSR(Us) OR ¬MSR(TA))
ERAT_Miss = ¬(E=R) AND (¬Match_304 OR ¬Valid OR Force_Miss)
Cache_Inhibit = ¬(E=R) AND ERAT(CI)
where
ERAT(Pr) is the problem state bit from the ERAT entry;
ERAT(AC) is the access control bit from the ERAT entry;
ERAT(CI) is the cache inhibit bit from the ERAT entry;
MSR(Pr) is the problem state bit from the machine state register;
MSR(TA) is the tags active bit from the machine state register;
MSR(Us) is the user state bit from the machine state register; and
Tag_404 is the selected tag mode bit from register 404.
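Assuming the bits above are available as simple flags, the control equations can be sketched in software as follows. The dict field names mirror the patent's notation (ERAT(Pr), MSR(TA), etc.), but the encoding is mine, not the patent's.

```python
# Illustrative rendering of the ERAT control signal equations.

def erat_signals(e_eq_r, match_304, valid, erat, msr, tag_404):
    """erat = {'Pr','AC','CI'}; msr = {'Pr','TA','Us'}; tag_404 = tag mode bit."""
    force_miss = (msr['Pr'] != erat['Pr']) or (msr['TA'] != tag_404)
    protection_exc = (not e_eq_r and not force_miss and match_304 and valid
                      and erat['AC'] and (msr['Us'] or not msr['TA']))
    erat_miss = not e_eq_r and (not match_304 or not valid or force_miss)
    cache_inhibit = not e_eq_r and erat['CI']
    erat_hit = e_eq_r or (match_304 and valid
                          and not protection_exc and not force_miss)
    return dict(hit=erat_hit, miss=erat_miss,
                prot=protection_exc, inhibit=cache_inhibit)
```

The sketch makes the priority visible: E=R forces a hit and suppresses everything else, while a problem state or tag mode mismatch (Force_Miss) converts an otherwise valid match into a miss so the entry is reloaded and revalidated.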
Fig. 5 shows instruction cache directory array 302 and its associated control structures in greater detail. The instruction cache directory comprises a 66 x 512 array 502 holding real page numbers and certain control bits, and an additional 1 x 512 array 503 holding most-recently-used (MRU) bits. Arrays 502 and 503 are physically separate, although they may logically be regarded as a single array. Array 502 is logically divided into two sets: the first 33 bits of each array entry belong to the first set (set 0), and the last 33 bits of each entry belong to the second set (set 1). Each entry in array 502 contains a 28-bit real page number for set 0 (i.e., real address bits 24-51), four valid bits for set 0, a parity bit for set 0, a 28-bit real page number for set 1, four valid bits for set 1, and a parity bit for set 1.
Fig. 6 shows instruction cache array 303 and its associated control structures in detail. Instruction cache array 303 is a 64-byte x 2048 array which, like directory array 502, is logically divided into two sets: the first 32 bytes of each array entry belong to set 0, and the last 32 bytes belong to set 1. Each entry of instruction array 303 thus contains 8 processor-executable instructions (of 4 bytes each) corresponding to set 0 and 8 processor-executable instructions (of 4 bytes each) corresponding to set 1.
Each entry in directory array 502 is associated with a contiguous group of 4 entries in instruction array 303. The portion of such a contiguous group of 4 entries contained in a single set (set 0 or set 1) is called a cache line, and the portion contained in a single entry of one set is called a cache sub-line. Although selection logic 601 can access each entry independently (i.e., each pair of cache sub-lines, one from set 0 and one from set 1), there is only one real page number in directory array 502 corresponding to each cache line, or group of four sub-lines. Accordingly, as explained more fully herein, a single cache line fill operation fills the four cache sub-lines constituting a cache line.
In the preferred embodiment, a cache line in instruction array 303 contains 128 bytes, so 7 address bits (address bits 57-63) are needed to specify a byte within the cache line. Address bits 57 and 58 specify one of the four sub-lines within a cache line. Real address bits 24-56 specify the real address of a cache line. Effective address bits 48-56 (corresponding to the low-order address bits of a cache line) are used to select an entry in arrays 502 and 503; selection logic 501 is a straight decode of these address bits. This is in effect a simple hash function: there are 2^9 possible combinations of effective address bits 48-56, into which 2^33 possible real addresses of cache lines (corresponding to real address bits 24-56) are mapped. Similarly, effective address bits 48-58 (corresponding to the low-order address bits of a cache sub-line) are used to select an entry in instruction array 303; selection logic 601 is a straight decode of these address bits. The real address of a cache sub-line in instruction array 303 is the real page number of the corresponding entry in directory array 502 (RA24:51) concatenated with effective address bits 52-58 (EA52:58).
Because each entry in the instruction cache directory contains two real page numbers (one from set 0 and one from set 1), there are two real page numbers (and two cache lines in instruction array 303) corresponding to each 9-bit pattern of effective address bits 48-56. This feature makes it possible to avoid instruction cache contention between threads.
Because selection logic 501 acts as a sparse hash function, there is no guarantee that either of the two real page numbers contained in the selected entry of array 502 corresponds to the full effective page number of the desired instruction. To verify correspondence, comparators 305 and 306 simultaneously compare the two real page numbers with the real page number output 411 of ERAT 301. Concurrently with this comparison, effective address bits 57-58 select one of the four set-0 valid bits and one of the four set-1 valid bits of the entry selected from array 502. These selected valid bits correspond to the cache sub-line of the desired instruction. They are ANDed with the outputs of the respective comparators 305, 306 to produce a pair of signals indicating a match with each set. The logical OR of these signals is ANDed with ERAT hit signal 410 to produce instruction cache hit signal 510, indicating that the desired instruction is in fact in the L1 instruction cache.
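The compare-and-gate path just described can be sketched as follows. The data layout (a list of dict entries) is my own; in hardware these are fields of the 66-bit directory entry and the comparators 305/306.

```python
# Sketch of the two-set directory lookup producing the hit signal 510
# and the set select used by selector 603.

def icache_hit(ea_48_56, ea_57_58, erat_rpn, erat_hit, directory):
    """directory[i] = {'rpn': [set0, set1], 'valid': [[4 bits], [4 bits]]}."""
    entry = directory[ea_48_56]           # straight decode of EA bits 48-56
    # Compare each set's RPN with the ERAT RPN, gated by the sub-line
    # valid bit that EA bits 57-58 select.
    match = [entry['rpn'][g] == erat_rpn and entry['valid'][g][ea_57_58]
             for g in (0, 1)]
    hit = erat_hit and (match[0] or match[1])    # instruction cache hit 510
    set_select = 1 if match[1] else 0            # drives selector 603
    return hit, set_select
```

Because the index is taken from the effective address while the tags are real page numbers, the ERAT must deliver its RPN before the hit can be confirmed, which is why the ERAT lookup proceeds in parallel with the directory access.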
As explained, selection logic 601 uses the effective address of the desired instruction supplied by the instruction unit to access an entry (a pair of "sub-lines") in instruction array 303. Selector 602 selects either the sub-line from set 0 of array 303 or a bypass line value from cache write bus 604. The bypass line value is used when a cache line is being filled following a cache miss; in that case, the new cache line value is presented on cache write bus 604 as soon as it is available from the external source, without first being written to instruction array 303. Bypassing the instruction array in this manner saves a small amount of time during a cache fill operation. The bypass is also used when cache inhibit line 413 is active.
Depending on the value of set select line 511, selector 603 selects either the output of selector 602 or the sub-line from set 1 of array 303. Set select line 511 is high if there is a cache hit in the set-1 half of the cache. That is, when comparator 306 detects a match between real page number 411 from the ERAT and the set-1 real page number of the entry selected from directory array 502, and the corresponding sub-line valid bit selected by selector 505 is valid, set select line 511 is high, causing selector 603 to select the sub-line from set 1 of array 303. In all other cases (including a cache miss), the output of selector 602 is selected. The output of selector 603 is 32 bytes of data from contiguous memory locations, representing 8 instructions. It is presented to instruction unit 201 for writing into one of sequential buffer 203, branch buffer 204, or thread switch buffer 205. In the event of a cache miss, instruction cache hit line 510 is low and the output of selector 603 is ignored (i.e., it is not written into any buffer in instruction unit 201). If there is a cache hit (line 510 active), the MRU bit in array 503 corresponding to the selected directory entry is updated with the value of set select line 511.
The foregoing describes the case where the desired instruction is in fact in the instruction cache. When there is an instruction cache miss, two possibilities exist: (a) there is an ERAT hit, but the instruction is not in the instruction array; or (b) there is an ERAT miss. Where there is an ERAT hit, the desired cache line can be filled relatively quickly. Because the real page number is in the ERAT, the desired data is known to be in main memory (and possibly in the L2 cache). Logic in the L1 instruction cache 106 can construct the full real address of the desired instruction from the ERAT data without accessing external address translation mechanisms, and fetch the data directly from the L2 cache or main memory. Where there is an ERAT miss, the external address translation mechanism must be accessed in order to construct the real address of the desired instruction, and the ERAT is updated with the new real page number as required. In this case, the desired data might not be in main memory at all, and might have to be read in from secondary storage such as a disk drive. Although in theory an ERAT miss may occur when the desired instruction is in fact in instruction array 303, in practice this rarely happens. Accordingly, whenever there is an ERAT miss, a line fill of the instruction array is initiated concurrently.
Fig. 7 illustrates the fast line fill sequencer logic, i.e., the control logic that generates a cache line fill in the event of an ERAT hit with a cache miss. The fast line fill sequencer logic comprises line fill initiate logic 701 and a pair of registers 710, 711 (designated LFAddr0 and LFAddr1) that store the parameters needed to complete a previously initiated line fill operation.
Each LFAddr register 710, 711 corresponds to one of the two threads; that is, LFAddr0 710 corresponds to thread 0 and LFAddr1 711 corresponds to thread 1. If instruction unit 201 makes a request for an instruction while executing thread 0, the required parameters are stored in LFAddr0 register 710; similarly, a request made while executing thread 1 is stored in LFAddr1 register 711. (When multithreading is switched off, only LFAddr0 register 710 is used.) Each LFAddr register 710, 711 can store only a single line fill request. Accordingly, if an ERAT hit with an instruction cache miss occurs for a given thread while an earlier line fill request for the same thread is still outstanding, the second request must be delayed.
Each LFAddr register contains effective address bits 48-58 (EA48:58), real address bits 24-51 (RA24:51), a set bit, and a request outstanding ("R") bit. The address bits are used both to form the real memory address of the cache line to be filled, and to write directory array 502 and instruction array 303 when the cache line is returned. The set bit determines which set (set 0 or set 1) of directory array 502 and instruction array 303 is written. The "R" bit is set to 1 when an outstanding request is placed in the LFAddr register, and is reset when the line fill request completes (reset logic not shown).
The line fill initiate logic receives as input ERAT hit line 410, instruction cache hit line 510, the active-thread control line (ActT) specifying which thread is active, and the request outstanding bits from LFAddr0 register 710 and LFAddr1 register 711 (designated "R0" and "R1," respectively). When there is an ERAT hit and an instruction cache miss, and no line fill request is currently pending in the LFAddr register corresponding to the currently active thread, a line fill request is initiated (activating line fill request line 703). If there is an ERAT hit and an instruction cache miss, but a pending line fill request exists in the LFAddr register corresponding to the currently active thread, the instruction cache waits until the pending line fill request completes (resetting the "R" bit) and then initiates the new line fill request. The logical relation between these inputs and outputs may be expressed as follows:
LF_Req = ERAT_Hit AND ¬ICache_Hit AND [(¬ActT AND ¬R0) OR (ActT AND ¬R1)]
When a line fill request is initiated, the line fill initiate logic generates write signals 704, 705 to write the required parameters into one of LFAddr registers 710, 711. At most one of write signals 704, 705 can be active at any given time. If one of write signals 704, 705 becomes active, the LFAddr register corresponding to the currently active thread stores EA48:58 (from L1 instruction cache address bus 231), RA24:51 (via path 411, from ERAT 301), and a set bit from set selection logic 720. At the same time, the request outstanding bit in that register is set to 1. The write signals are derived logically as follows:
Write0 = ERAT_Hit AND ¬ICache_Hit AND ¬ActT AND ¬R0
Write1 = ERAT_Hit AND ¬ICache_Hit AND ActT AND ¬R1
Because directory array 502 and instruction array 303 are divided into two sets (set 0 and set 1), each indexed by the same hash function, the cache line being filled may logically be written to either set. The set to which the cache line is written is determined by set selection logic 720 at the time the line fill request is made, and is stored in the set bit of the appropriate LFAddr register. Normally, the selected set is the least recently used set for the cache line being filled, i.e., the set opposite that indicated by the MRU bit of the directory array 502 entry indexed by the hash function. However, if an unfinished line fill request of the non-active thread exists, and that unfinished line fill will fill the same cache entry, then the selected set is the set opposite that selected by the unfinished line fill request of the non-active thread. By fixing the set at the moment the line fill request is initiated in this manner, a possible livelock situation (i.e., two outstanding line fill requests attempting to write the same set) is avoided.
Fig. 7 illustrates the use of the information stored in register 710; for conciseness, the similar data paths from register 711 are omitted from the figure. The address of the cache line containing the requested instruction is derived from the address information stored in the applicable LFAddr register. Specifically, the real page number (RA24:51) is concatenated with bits EA52:58 to obtain the real address of the cache line. This is represented in Fig. 7 as element 712; it need not be a separate register, the figure merely showing the assembly of an address from the appropriate bits of a LFAddr register. Line fill request line 703 initiates a data request to memory management unit 222, the address represented by 712 being transmitted on cache fill bus 233. A thread identifier bit is also transmitted, so that the L1 instruction cache control logic can later determine which LFAddr register is associated with the returned instruction. Memory management unit 222 then determines whether to obtain the requested instruction from L2 cache 108, main memory 102, or some other source. When memory management unit 222 can obtain the requested instruction, it transmits the instruction together with the thread identifier bit to the L1 instruction cache on bus 233.
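The address assembly performed by element 712 is a simple concatenation, sketched below. The function is illustrative; I assume, per the bit numbering above, that RA24:51 sits above a 12-bit byte index and that EA52:58 fills the top 7 bits of that index, leaving the low 5 offset bits (bits 59-63, a 32-byte sub-line) zero.

```python
# Sketch of element 712: fill address = RPN (RA 24:51) ++ EA 52:58.

def line_fill_address(rpn_24_51: int, ea_52_58: int) -> int:
    """40-bit real address of the sub-line to fetch, 32-byte aligned."""
    assert 0 <= rpn_24_51 < 1 << 28 and 0 <= ea_52_58 < 1 << 7
    return (rpn_24_51 << 12) | (ea_52_58 << 5)
```
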
The return of the requested instruction on bus 233 produces control signals that write the data to directory array 502 and instruction array 303. Specifically, EA48:56 from the appropriate LFAddr register 710, 711 is used to select an entry in array 502. The set bit of the LFAddr register, together with control signals, is used to generate a write signal on one of write signal lines 706, 707 for one half of array 502, the state of the set bit determining which half of array 502 is written (i.e., which of write signal lines 706, 707 is active). The real page number from the LFAddr register (RA24:51) is written into the entry selected by EA48:56, in the half of array 502 determined by the set bit. The MRU bit of the directory array is updated at the same time.
Concurrently with the above operation, EA48:56 of the LFAddr register is used to select an entry in instruction array 303, and the set bit of the LFAddr register is similarly used to generate a write signal for one half of that array. The data written to this location is the data from bus 233 (a string of instructions), which is presented on LF data bus 604 shown in Fig. 6. In the case of filling instruction array 303, however, only one sub-line can be written at a time: LF data bus 604 presents one sub-line (32 bytes) at a time. The full set of sub-lines is selected through selection logic 601 using EA48:56 of the LFAddr register together with two additional address bits 57 and 58 supplied by sequencer logic (not shown). Filling a complete cache line therefore requires 4 write cycles.
When the real page number of the updated instruction array entry is written to the directory array, the four valid bits (one for each sub-line) are initially set invalid. As each successive sub-line is written to instruction array 303, the corresponding valid bit in directory array 502 is updated to reflect that the data is now valid. Thus, should the successive write cycles of a cache line fill be interrupted for any reason, directory array 502 still contains correct information.
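The four-beat fill with per-sub-line validity can be sketched as follows. The data layout is mine (matching the directory sketch earlier in spirit); in hardware the loop is the sequencer stepping address bits 57-58.

```python
# Sketch of the four-beat line fill: invalidate all sub-line valid bits,
# then mark each sub-line valid as its 32-byte beat is written, so an
# interrupted fill leaves the directory consistent.

def fill_cache_line(dir_entry, inst_line, set_no, sub_lines):
    """dir_entry = {'valid': [[4 bits], [4 bits]]}; inst_line = [set0, set1]."""
    dir_entry['valid'][set_no] = [0, 0, 0, 0]   # invalidate with new RPN
    for i, data in enumerate(sub_lines):        # up to four 32-byte beats
        inst_line[set_no][i] = data
        dir_entry['valid'][set_no][i] = 1       # this sub-line now valid
    return dir_entry, inst_line
```

Passing fewer than four sub-lines models an interrupted fill: only the beats actually written are marked valid, which is exactly the property the paragraph above relies on.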
In the event of an ERAT miss, the real page number output of selector 402 is unreliable. Before anything else can be done, the effective address from instruction unit 201 must be translated to a real page number. ERAT Miss line 412 triggers the address translation mechanism depicted logically in Fig. 8. The actual hardware that performs this translation is not part of instruction cache 106; some of it may be contained in CPU 101, while other hardware may reside in main memory 102 or elsewhere. Compared with the line fill operation described above, this address translation typically requires a relatively large number of cycles. When the translated real page number is returned following the ERAT miss, it is used to update ERAT 301 and is simultaneously written into the appropriate LFAddr register (710 or 711) to initiate a line fill operation. Although in theory the requested instruction might already be in the cache notwithstanding the ERAT miss, in practice this is so rare that performance is improved by requesting the line fill operation immediately rather than waiting for the ERAT entry to be filled.
It will be appreciated that, for the sake of brevity, logic not essential to an understanding of the present invention has been omitted from the figures and from this description. For example, the logic that maintains the MRU bits of array 502, and the logic that detects parity errors and takes appropriate corrective action, have been omitted.
In the preferred embodiment, the ERAT is used to provide a portion of the real address (the real page number) for comparison with the real page number in the directory array in order to verify a cache hit. This design is preferred because the ERAT provides fast translation to a real page number, independent of the response time of the main address translation mechanism. Because the main address translation mechanism is not required to translate addresses with the single-cycle speed demanded by the instruction cache, the system designer is freed from certain constraints. However, in alternative embodiments, an instruction cache of the structure described herein could be built without an ERAT. In that case, the main address translation mechanism could supply the real page number to be compared with the real page number in the directory array. In still other alternative embodiments, some other mechanism, either inside or outside the L1 instruction cache, might supply the real page number.
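The ERAT hit verification described above can be sketched briefly. This is an assumed software model (the modulo hash and function name are illustrative): the effective address hashes to one ERAT entry, the stored effective page number is compared against the desired one, and only on a match is the stored real page number trusted for the directory comparison.

```python
# Illustrative ERAT lookup: hashed access, then comparison of the stored
# effective page number to verify the hit before using the real page number.

def erat_lookup(erat, effective_page):
    slot = erat[effective_page % 64]        # hashed access to one ERAT entry
    if slot is None:
        return None                         # ERAT miss: empty entry
    stored_ea, real_page = slot
    if stored_ea != effective_page:
        return None                         # hash collision: also a miss
    return real_page                        # verified hit
```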
In the preferred embodiment, the cache associativity is the same as the number of threads; this helps avoid contention among threads for the common cache. Alternatively, however, the cache described herein could be designed with an associativity different from the number of threads. For example, if the processor supports a large number of threads, an associativity as large as the thread count may not be needed to avoid contention. In that case, although contention is theoretically possible with an associativity less than the number of threads, a smaller associativity with occasional contention may be acceptable. Even a cache associativity of 1 may be tolerable, despite some contention.
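The contention argument can be illustrated with a toy placement model. This is not from the patent: the FIFO eviction below is a deliberately simple stand-in for the MRU-based replacement the text mentions elsewhere, and the function name is illustrative. It shows why associativity equal to the thread count helps: two threads whose lines map to the same congruence class can coexist in a 2-way cache, while a direct-mapped cache forces them to evict each other.

```python
# Illustrative placement into an N-way congruence class; FIFO eviction is a
# simplification standing in for the real replacement policy.

def place(cache, index, tag, ways):
    """Insert tag into the congruence class at index, evicting if full."""
    cls = cache.setdefault(index, [])
    if tag in cls:
        return False                 # already resident, no eviction needed
    if len(cls) >= ways:
        cls.pop(0)                   # evict oldest line in the class
    cls.append(tag)
    return True
```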
Although the present invention has been described in terms of what are presently considered the most practical preferred embodiments, it is to be understood that the invention is not limited to the disclosed embodiments; on the contrary, it is intended to cover the various modifications and equivalents included within the spirit and scope of the appended claims.

Claims (12)

1. A multithreaded computer processing apparatus, comprising:
a plurality of sets of registers for supporting a plurality of threads of execution, each set of registers corresponding to a respective one of said plurality of threads;
an instruction unit, said instruction unit comprising decode logic for decoding instructions and sequencing logic for generating effective addresses of instructions to be executed; and
an instruction cache providing instructions in response to a desired effective address generated by said instruction unit, said instruction cache comprising:
(a) a directory array having a plurality of entries, each entry containing a portion of the real address of an instruction, wherein said desired effective address is used to select an entry of said directory array;
(b) an instruction array having a plurality of entries, each instruction array entry being associated with an entry of said directory array and containing at least one instruction, wherein said desired effective address is used to select an entry of said instruction array; and
(c) a plurality of line fill registers, each of said line fill registers corresponding to a respective one of said plurality of threads, and each line fill register storing at least a portion of the real address of a desired instruction to be retrieved in response to an instruction cache miss.
2. The multithreaded computer processing apparatus of claim 1, wherein said instruction cache further comprises:
(d) an effective-to-real address translation array having a plurality of entries, each entry containing a portion of an effective address and a portion of a real address, wherein said desired effective address is used to select an entry of said effective-to-real address translation array;
wherein the portion of the real address of a desired instruction stored in a line fill register is obtained from an entry of said effective-to-real address translation array.
3. The multithreaded computer processing apparatus of claim 2, wherein said instruction cache further comprises:
(e) a comparator for comparing the effective address portion of an entry of said effective-to-real address translation array with the corresponding portion of said desired effective address to determine a hit in said effective-to-real address translation array.
4. The multithreaded computer processing apparatus of claim 1, wherein:
said directory array is divided into N sets, where N > 1, each directory array entry containing respective portions of the real addresses of a plurality of instructions, each real address portion belonging to a respective one of the N sets of said directory array; and
said instruction array is divided into N sets, each set of said instruction array corresponding to a respective set of said directory array, each instruction array entry containing a plurality of instructions, each instruction belonging to a respective one of the N sets of said instruction array.
5. The multithreaded computer processing apparatus of claim 4, wherein said multithreaded computer processing apparatus supports the execution of N threads.
6. The multithreaded computer processing apparatus of claim 4, wherein each of said line fill registers includes a set field, said set field specifying the one of the N sets in which the desired instruction will be stored once retrieved.
7. The multithreaded computer processing apparatus of claim 4, said instruction cache further comprising:
(e) N comparators, each comparator being associated with a respective set of said directory array, each comparator comparing the respective real address portion of an instruction from the corresponding portion of the selected entry of said directory array with a common portion of the real address associated with said desired effective address, to determine a cache hit.
8. The multithreaded computer processing apparatus of claim 4, said instruction cache further comprising:
(d) an effective-to-real address translation array having a plurality of entries, each entry containing a portion of an effective address and a portion of a real address, wherein said desired effective address is used to select an entry of said effective-to-real address translation array;
wherein the portion of the real address of a desired instruction stored in a line fill register is obtained from an entry of said effective-to-real address translation array.
9. The multithreaded computer processing apparatus of claim 8, said instruction cache further comprising:
(e) N comparators, each comparator being associated with a respective set of said directory array, each comparator comparing the respective real address portion of an instruction from the corresponding portion of the selected entry of said directory array with a common portion of the real address associated with said desired effective address, to determine a cache hit, wherein the common portion of the real address associated with said desired effective address that is compared by said comparators to determine a cache hit is obtained from an entry of said effective-to-real address translation array.
10. A multithreaded computer processing apparatus, comprising:
a plurality of sets of registers, each set of registers corresponding to a respective thread;
an instruction unit, said instruction unit comprising decode logic for decoding instructions and sequencing logic for generating effective addresses of instructions to be executed; and
an instruction cache providing instructions in response to a desired effective address generated by said instruction unit, said instruction cache comprising:
a directory array having a plurality of entries, said directory array being divided into N sets, where N > 1, each directory array entry comprising N portions, each entry portion being associated with a respective one of the N sets and containing a respective portion of the real address of an instruction, wherein said desired effective address is used to select an entry of said directory array;
an instruction array having a plurality of entries, each instruction array entry being associated with a respective entry of said directory array and containing a plurality of instructions, said instruction array being divided into N sets, each set of said instruction array corresponding to a respective set of said directory array, each instruction array entry comprising N portions, each entry portion being associated with a respective one of the N sets of said instruction array and containing at least one instruction, wherein said desired effective address is used to select an entry of said instruction array; and
N comparators, each comparator being associated with a respective set of said directory array, each comparator comparing the respective real address portion of an instruction from the corresponding portion of the selected entry of said directory array with a common portion of the real address associated with said desired effective address, to determine a cache hit.
11. The multithreaded computer processing apparatus of claim 10, wherein said multithreaded computer processing apparatus supports the execution of N threads.
12. The multithreaded computer processing apparatus of claim 10, wherein said instruction cache further comprises:
(d) an effective-to-real address translation array having a plurality of entries, each entry containing a portion of an effective address and a corresponding portion of a real address, wherein said desired effective address is used to select an entry of said effective-to-real address translation array;
wherein the common portion of the real address associated with said desired effective address that is compared by said comparators to determine a cache hit is obtained from an entry of said effective-to-real address translation array.
CNB001016954A 1999-03-10 2000-01-27 Command cache for multiple thread processor Expired - Fee Related CN1168025C (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/266,133 1999-03-10
US09/266,133 US6161166A (en) 1997-11-10 1999-03-10 Instruction cache for multithreaded processor

Publications (2)

Publication Number Publication Date
CN1267024A true CN1267024A (en) 2000-09-20
CN1168025C CN1168025C (en) 2004-09-22

Family

ID=23013309

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB001016954A Expired - Fee Related CN1168025C (en) 1999-03-10 2000-01-27 Command cache for multiple thread processor

Country Status (2)

Country Link
JP (1) JP3431878B2 (en)
CN (1) CN1168025C (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1307561C (en) * 2003-12-09 2007-03-28 国际商业机器公司 Multi-level cache having overlapping congruence groups of associativity sets in different cache levels
CN1317645C (en) * 2002-06-04 2007-05-23 杉桥技术公司 Method and apparatus for multithreaded cache with cache eviction based on thread identifier
CN1317644C (en) * 2002-06-04 2007-05-23 杉桥技术公司 Method and apparatus for multithreaded cache with simplified implementation of cache replacement policy
CN100345124C (en) * 2003-09-25 2007-10-24 国际商业机器公司 Method and system for reduction of cache miss rates using shared private caches
CN100426260C (en) * 2005-12-23 2008-10-15 中国科学院计算技术研究所 Fetching method and system for multiple line distance processor using path predicting technology
CN102057359A (en) * 2009-04-10 2011-05-11 松下电器产业株式会社 Cache memory device, cache memory control method, program, and integrated circuit

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030188141A1 (en) * 2002-03-29 2003-10-02 Shailender Chaudhry Time-multiplexed speculative multi-threading to support single-threaded applications
JP2002342163A (en) * 2001-05-15 2002-11-29 Fujitsu Ltd Method for controlling cache for multithread processor
JP3900025B2 (en) 2002-06-24 2007-04-04 日本電気株式会社 Hit determination control method for shared cache memory and hit determination control method for shared cache memory
US7805588B2 (en) * 2005-10-20 2010-09-28 Qualcomm Incorporated Caching memory attribute indicators with cached memory data field
JP5333433B2 (en) 2008-02-26 2013-11-06 日本電気株式会社 Processor, method and program for executing multiple instruction streams at low cost
JP2013050745A (en) * 2009-11-26 2013-03-14 Nec Corp Exclusive control device, method, and program

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5475938A (en) * 1977-11-30 1979-06-18 Fujitsu Ltd Data processor of multiplex artificial memory system
JPS5677965A (en) * 1979-11-26 1981-06-26 Fujitsu Ltd Buffer memory control system
US4332010A (en) * 1980-03-17 1982-05-25 International Business Machines Corporation Cache synonym detection and handling mechanism
JPS58182180A (en) * 1982-04-16 1983-10-25 Hitachi Ltd Buffer storage device
JPS5975483A (en) * 1982-10-22 1984-04-28 Fujitsu Ltd Buffer storage control system
JPH06100987B2 (en) * 1987-04-10 1994-12-12 日本電信電話株式会社 Address translation control method
JPS63284648A (en) * 1987-05-18 1988-11-21 Fujitsu Ltd Cache memory control system
DE68909426T2 (en) * 1988-01-15 1994-01-27 Quantel Ltd Data processing and transmission.
JPH0320847A (en) * 1989-06-19 1991-01-29 Fujitsu Ltd Cache memory control system
JPH03216744A (en) * 1990-01-22 1991-09-24 Fujitsu Ltd Built-in cache memory control system
JPH03235143A (en) * 1990-02-13 1991-10-21 Sanyo Electric Co Ltd Cache memory controller
CA2094269A1 (en) * 1990-10-19 1992-04-20 Steven M. Oberlin Scalable parallel vector computer system
JPH04205636A (en) * 1990-11-30 1992-07-27 Matsushita Electric Ind Co Ltd High speed address translation device
JP3100807B2 (en) * 1992-09-24 2000-10-23 松下電器産業株式会社 Cache memory device


Also Published As

Publication number Publication date
CN1168025C (en) 2004-09-22
JP2000259498A (en) 2000-09-22
JP3431878B2 (en) 2003-07-28

Similar Documents

Publication Publication Date Title
CN1306420C (en) Apparatus and method for pre-fetching data to cached memory using persistent historical page table data
CN1242330C (en) Fly-by XOR method and system
US6161166A (en) Instruction cache for multithreaded processor
CN102473138B (en) There is the extension main memory hierarchy of the flash memory processed for page fault
CN1287292C (en) Using type bits to track storage of ecc and predecode bits in a level two cache
CN1248118C (en) Method and system for making buffer-store line in cache fail using guss means
US8806112B2 (en) Meta data handling within a flash media controller
CN1296827C (en) Method and equipment for reducing execution time in set associative cache memory with group prediction
US7447868B2 (en) Using vector processors to accelerate cache lookups
US7219185B2 (en) Apparatus and method for selecting instructions for execution based on bank prediction of a multi-bank cache
US11693775B2 (en) Adaptive cache
CN1307561C (en) Multi-level cache having overlapping congruence groups of associativity sets in different cache levels
CN1168025C (en) Command cache for multiple thread processor
US10019381B2 (en) Cache control to reduce transaction roll back
JPH07104816B2 (en) Method for operating computer system and memory management device in computer system
CN1675626A (en) Instruction cache way prediction for jump targets
JP2000003308A (en) Overlapped memory access method and device to l1 and l2
KR19990077432A (en) High performance cache directory addressing scheme for variable cache sizes utilizing associativity
US6571316B1 (en) Cache memory array for multiple address spaces
CN1659527A (en) Method and system to retrieve information from a storage device
CN1397887A (en) Virtual set high speed buffer storage for reorientation of stored data
US5659699A (en) Method and system for managing cache memory utilizing multiple hash functions
Wen et al. Hardware memory management for future mobile hybrid memory systems
US20230061668A1 (en) Cache Memory Addressing
KR100618248B1 (en) Supporting multiple outstanding requests to multiple targets in a pipelined memory system

Legal Events

Date Code Title Description
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C06 Publication
PB01 Publication
C14 Grant of patent or utility model
GR01 Patent grant
REG Reference to a national code

Ref country code: HK

Ref legal event code: GR

Ref document number: 1056100

Country of ref document: HK

ASS Succession or assignment of patent right

Owner name: INTEL CORP.

Free format text: FORMER OWNER: INTERNATIONAL BUSINESS MACHINES CORPORATION

Effective date: 20131025

C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20131025

Address after: California, United States

Patentee after: Intel Corporation

Address before: New York, United States

Patentee before: International Business Machines Corp.

CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20040922

Termination date: 20170127