CN100461135C - Method and apparatus for renaming a block of cached data - Google Patents
Method and apparatus for renaming a block of cached data
- Publication number
- CN100461135C CNB200410001593XA CN200410001593A
- Authority
- CN
- China
- Prior art keywords
- cache line
- block
- instruction
- microprocessor
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Lifetime
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/30—Arrangements for executing machine instructions, e.g. instruction decode
- G06F9/30003—Arrangements for executing specific machine instructions
- G06F9/30007—Arrangements for executing specific machine instructions to perform operations on data operands
- G06F9/30032—Movement instructions, e.g. MOVE, SHIFT, ROTATE, SHUFFLE
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0862—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with prefetch
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0877—Cache access modes
- G06F12/0879—Burst mode
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/30—Arrangements for executing machine instructions, e.g. instruction decode
- G06F9/30003—Arrangements for executing specific machine instructions
- G06F9/3004—Arrangements for executing specific machine instructions to perform operations on memory
- G06F9/30047—Prefetch instructions; cache control instructions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/30—Arrangements for executing machine instructions, e.g. instruction decode
- G06F9/38—Concurrent instruction execution, e.g. pipeline or look ahead
- G06F9/3818—Decoding for concurrent execution
- G06F9/3822—Parallel decoding, e.g. parallel decode units
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/30—Arrangements for executing machine instructions, e.g. instruction decode
- G06F9/38—Concurrent instruction execution, e.g. pipeline or look ahead
- G06F9/3824—Operand accessing
- G06F9/383—Operand prefetching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/60—Details of cache memory
- G06F2212/6028—Prefetching based on hints or prefetch instructions
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Memory System Of A Hierarchy Structure (AREA)
- Advance Control (AREA)
Abstract
A microprocessor apparatus is provided that enables exclusive allocation and renaming of a cache line. The apparatus includes translation logic and execution logic. The translation logic translates an allocate-and-rename instruction into a micro instruction sequence that directs a microprocessor to allocate a first cache line in an exclusive state and to copy the contents of a second cache line into the first cache line. The execution logic is coupled to the translation logic. The execution logic receives the micro instruction sequence and issues transactions over a memory bus that request the first cache line in the exclusive state. Upon grant of exclusive rights, the execution logic copies the contents of the second cache line into the first cache line.
Description
Technical field
The present invention relates to the field of microelectronics, and more particularly to an apparatus and method that enable a programmer to direct a microprocessor to perform a rename operation on a named block of cache lines within its internal cache; that is, it relates to a method and apparatus for renaming a block of cached data.
Background of the invention
This application is related to the following co-pending U.S. Patent Applications, which have the same applicant and inventors: U.S. Application No. 10/364911, filed 2/11/2003, entitled "Prefetch with intent to store mechanism"; and U.S. Application No. 10/364919, filed 2/11/2003, entitled "Prefetch with intent to store mechanism for block memory".
In a present-day microprocessor, the data transfer rate between its internal logic blocks far exceeds the speed with which it can access external memory. In an x86 desktop computer configuration, for example, the interface between the bus and system memory operates at hundreds of megahertz, while internal microprocessor clock speeds approach tens of gigahertz. Consequently, a hierarchy of cache structures has evolved in recent years that allows high-performance microprocessors to avoid stalling every time data must be read or written, so that slow transactions over the memory bus need not be executed for every access.
An on-board, or local, cache in a pipelined microprocessor is a self-contained unit that operates, in essence, transparently to the instructions flowing through the pipeline. It ensures that data needed by the instructions of an application program are already resident in the cache and can be accessed at pipeline speed rather than at memory bus speed. Different techniques employ different cache architectures, many consisting of multiple levels: a first-level cache sits very close to the processor's execution logic; a second-level cache, which may be on-chip or off-chip, stores data that is accessed less frequently; a third-level cache may reside on a memory card; and so on. Whichever architecture is used, one of ordinary skill in the art will appreciate that the purpose of a cache is to eliminate the stalling of instructions in the microprocessor pipeline while a slow bus transaction is issued over the memory bus to read or write the data required by a pending operation. When such a stall occurs, program execution comes to an intolerable halt until the required data arrives.
The prevalence of shared memory regions among the components of present-day computer systems complicates matters further. For example, a primary microprocessor and a communications microprocessor commonly communicate by reading and writing data in a designated memory region. It is likewise very common for the primary microprocessor and a video device on a video card, which displays data to an operator, to share a region of memory referred to as a video buffer.
In a shared memory system, data from a shared region may come to reside simultaneously in the local caches of two different microprocessors, or of different devices attached to the same memory bus. If all of the devices merely read the data, no harm is done by letting each keep a copy in its local cache structure. But unpredictable consequences can follow if each device is permitted to modify the copy of the data held in its local cache.
To prevent this, system designers have developed cache coherency protocols that indicate the state of data in a cache. MESI is the most widely used such protocol. Maintaining local caches according to MESI ensures that two copies of the same data cannot be modified at the same time. The MESI shared state indicates whether a particular block of data in a local cache is shared. If it is shared, the local controller may not modify the block without first executing transactions over the slower memory bus to obtain exclusive permission to the block. To modify data, a processor must first obtain exclusive ownership of it.
The problem emphasized by the present invention concerns the delays incurred in an application program when data is to be written to memory. One of ordinary skill in the art will appreciate that a cache has no reasonable way of knowing in advance when a particular memory region will first be needed, so a memory bus delay always occurs the first time that region is loaded into the local cache. Recognizing this fact, designers developed prefetch instructions that can be executed by a microprocessor and embedded in an application program. A prefetch instruction, however, does not operate on an operand in the program flow; rather, it directs the local cache to load data from memory for the cache's future use. And because the cache unit interacts with memory transparently to the instruction flow in the microprocessor pipeline, the prudent approach is to issue a prefetch instruction well before the data is needed, so that the cache can fetch the data from memory in parallel with the execution of other instructions in the primary program flow. Then, when a subsequent instruction needs to access the prefetched data, the data is readily accessible in the cache, and program execution does not stall waiting for it to be fetched from memory. Prefetched data is readily accessible, however, only for reads. If the prefetched data is to be modified by a subsequent instruction, program execution must still be delayed while the cache unit goes to the bus to request exclusive ownership of the data.
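As an illustration of the conventional pattern just described (a sketch, not an example taken from the patent), C compilers expose the x86 prefetch instructions through the standard `_mm_prefetch` intrinsic; the look-ahead distance of 128 elements below is an arbitrary assumption:

```c
#include <xmmintrin.h>
#include <stddef.h>

/* Sum an array, issuing each prefetch well before the data is used so
 * that the line fill from memory overlaps execution of the intervening
 * iterations.  Note the limitation described above: this hides read
 * latency only; a later store to a line that arrived in the Shared
 * state would still stall while exclusive ownership is obtained. */
long sum(const int *a, size_t n) {
    long s = 0;
    for (size_t i = 0; i < n; i++) {
        size_t ahead = (i + 128 < n) ? i + 128 : n - 1;
        _mm_prefetch((const char *)&a[ahead], _MM_HINT_T0);
        s += a[i];  /* by now the line holding a[i] is already cached */
    }
    return s;
}
```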
As noted above, shared memory regions are widely used in present-day computer systems to communicate information between components, and memory copies are a very common operation in present-day application programs. A memory copy operation copies the contents of one memory location to another memory location. Consider, for example, a video buffer whose contents are to be transferred for display. Such a buffer typically consists of several smaller buffers. When a first small buffer has been composed for display, its contents are copied to a second memory location, say one of four screen quadrants; when the quadrant at the second location has in turn been composed, the contents at the second location are copied to a third memory location, where the complete image resides. One skilled in the art will recognize that this generation of video buffer data is only one example of the many kinds of work application programs perform that copy data from one location to the next.
Although a memory copy operation appears simple, under the hierarchical cache architecture of a present-day microprocessor such copies are in fact quite troublesome. Given a first data structure SRC that has already been allocated and modified in the cache, completing a memory copy requires operations that (1) allocate, and guarantee exclusive ownership of, a second data structure DEST into which the contents of the first data structure SRC will be copied; and (2) modify the contents of DEST so that they match the contents of SRC. The problem, as discussed above, is that if DEST is not already allocated in the cache, or is allocated but in the shared state, then guaranteeing exclusive ownership of DEST requires halting the execution of the application program while the appropriate bus transactions are issued over the memory bus.
Moreover, the problem worsens as the memory range to be copied grows. For instance, obtaining exclusive ownership of 100 cache lines causes far more program dead time than obtaining exclusive ownership of a single cache line.
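A short sketch of why the cost grows (an illustration under stated assumptions, using the common 32-byte line width cited later in this description): a copy of B bytes touches B/32 destination cache lines, and under MESI each of those lines must be raised to the exclusive state before it may be written.

```c
#include <string.h>
#include <stddef.h>

#define LINE_BYTES 32  /* assumed cache line width */

/* Copy a buffer line by line (assumes bytes is a multiple of
 * LINE_BYTES).  A 3200-byte region spans 100 destination cache
 * lines; each destination line that is absent or Shared costs a
 * memory bus transaction to obtain exclusive ownership before the
 * write below may complete, so the total stall scales with the
 * size of the copied region. */
void buffer_copy(char *dest, const char *src, size_t bytes) {
    for (size_t off = 0; off < bytes; off += LINE_BYTES)
        memcpy(dest + off, src + off, LINE_BYTES);
}
```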
What is needed, therefore, is an apparatus and method that enable a programmer to direct a microprocessor to obtain exclusive ownership of a first block of cache lines and to copy a second block of cache lines into the first block, where the data copy proceeds in parallel with the execution of subsequent program instructions.
In addition, an apparatus and method are needed that enable the programmer to direct the microprocessor to write the data in the first block of cache lines back to memory.
Summary of the invention
The present invention, along with the related applications noted above, overcomes these and other problems and shortcomings of the prior art by providing a superior technique for copying a block of cache lines in the cache into a second block of cache lines in that cache, where the copy operation proceeds in parallel with the execution of subsequent instructions in the application program. In one embodiment, the invention provides a microprocessor apparatus that includes translation logic and execution logic. The translation logic translates a block allocate and rename instruction into a micro instruction sequence that directs the microprocessor to allocate a first block of cache lines in the exclusive state and to copy the contents of a second block of cache lines into the first block. The execution logic is coupled to the translation logic; it receives the micro instruction sequence and issues transactions over the memory bus requesting the first block of cache lines in the exclusive state. Upon obtaining exclusive ownership of the first block, the execution logic copies the contents of the second block of cache lines into the first block.
An object of the present invention is to provide a microprocessor apparatus for performing a block rename operation. The apparatus includes a block allocate and rename instruction and a translator. The block allocate and rename instruction directs the microprocessor to allocate a first block of cache lines in the exclusive state and to copy the contents of a second block of cache lines into the first block. The translator receives the block allocate and rename instruction and translates it into associated micro instructions, which direct execution logic within the microprocessor to issue a plurality of bus transactions over a memory bus requesting exclusive ownership of the first block of cache lines, and to copy the contents of the second block of cache lines into the first block.
Another object of the present invention is to provide a memory copy method. The method includes retrieving a block allocate and rename macro instruction; translating the block allocate and rename macro instruction into a micro instruction sequence that directs a microprocessor to allocate a first block of cache lines in the exclusive state and to copy the contents of a second block of cache lines into the first block; and, responsive to the micro instruction sequence, issuing bus transactions over a memory bus to allocate the first block of cache lines in the exclusive state and copying the contents of the second block of cache lines into the first block.
Brief description of the drawings
The aforementioned and other objects, features, and advantages of the present invention will be better understood in view of the following description and accompanying drawings:
Fig. 1 is a block diagram illustrating the significant pipeline stages of a present-day microprocessor;
Fig. 2 is a block diagram depicting a cache unit interfaced to memory for performing a prefetch operation within the microprocessor of Fig. 1;
Figs. 3A and 3B are timing diagrams illustrating two possible sets of transactions issued over a memory bus by the microprocessor of Figs. 1 and 2 to perform a prefetch operation;
Fig. 4 is a block diagram showing an extended prefetch instruction according to the present invention;
Fig. 5 is a table illustrating how the extended address specifier field of the extended prefetch instruction of Fig. 4 is encoded to direct a microprocessor to prefetch a cache line in the exclusive MESI state;
Fig. 6 is a block diagram detailing a microprocessor according to the present invention for performing a prefetch-with-intent-to-store operation;
Fig. 7 is a block diagram depicting a cache unit interfaced to memory for performing a prefetch-with-intent-to-store operation within the microprocessor of Fig. 6;
Fig. 8 is a timing diagram illustrating the bus transactions issued over a memory bus by the microprocessor of Figs. 6 and 7 according to the present invention to perform a prefetch-with-intent-to-store operation;
Fig. 9 is a block diagram showing an extended block prefetch instruction according to the present invention;
Fig. 10 is a block diagram depicting a cache unit interfaced to memory for performing a block prefetch-and-store operation within the microprocessor of Fig. 6;
Fig. 11 is a timing diagram illustrating the bus transactions issued over a memory bus by the microprocessor of Figs. 6 and 10 according to the present invention to perform a block prefetch-and-store operation;
Fig. 12 is a flow chart illustrating a method according to the present invention for performing a prefetch-with-intent-to-store operation;
Fig. 13 is a flow chart illustrating a method according to the present invention for performing a block prefetch-with-intent-to-store operation;
Fig. 14 is a table illustrating how the extended address specifier field of the extended prefetch instruction of Fig. 4 is encoded to direct a microprocessor to allocate and rename a cache line;
Fig. 15 is a block diagram depicting a cache unit interfaced to memory for allocating and renaming a cache line within the microprocessor of Fig. 6;
Fig. 16 is a flow chart illustrating a method according to the present invention for allocating and renaming a cache line;
Fig. 17 is a block diagram depicting a cache unit interfaced to memory for allocating and renaming a block of cache lines within the microprocessor of Fig. 6;
Fig. 18 is a flow chart illustrating a method according to the present invention for allocating and renaming a block of cached data;
The reference numerals in the drawings are as follows:
100 pipelined microprocessor 101 fetch stage
102 translate stage 103 register stage
104 address stage 105 execute stage
106 execution logic 107 data cache
108 memory 109 cache bus
110 memory bus 120 program flow
121~123 macro instructions
200 block diagram 201 microprocessor
202 macro instructions 210 translator
211 micro instructions 220 cache unit
221 record logic 222 data cache
223 stall signal 230 bus unit
240 system memory bus
241 bus devices
242 data memory
301~302 transaction sets
303~304 bus transactions
400 extended prefetch instruction 401 prefix
402 prefetch opcode 403 extended address specifier
600 microprocessor 601 fetch logic
602 instruction cache 603 instruction memory
604 instruction queue 606 translation logic
607 extended translation logic 608 micro instruction queue
609 execution logic
610 extended cache unit
611 data cache 612 extended record logic
613 bus unit 614 data memory
615 memory bus
700 block diagram 701 microprocessor
702 macro instructions 710 extended translator
711 micro instructions
720 extended cache unit
721 extended record logic 722 data cache
723 stall signal 730 bus unit
740 bus
741 bus devices
742 data memory
800 timing diagram 801~802 bus transactions
900 extended block prefetch instruction 901 prefix
902 repeat prefix 903 prefetch opcode
904 extended address specifier
1000 block diagram 1001 microprocessor
1002 macro instructions 1010 extended translator
1011 micro instruction sequence 1012 architectural registers
1013 shadow count register
1020 extended cache unit
1021 extended block record logic
1022 data cache
1023 stall signal 1030 bus unit
1040 memory bus
1041 bus devices
1042 data memory
1100 timing diagram 1101~1102 bus transactions
1200~1220 steps of the prefetch-with-intent-to-store method
1300~1328 steps of the block prefetch-with-intent-to-store method
1500 block diagram 1501 microprocessor
1502 macro instructions 1510 extended translator
1505 architectural registers 1511 micro instructions
1520 extended cache unit
1521 extended cache logic
1522 data cache 1523 stall signal
1524 source region SRC 1525 destination region DEST
1530 bus unit
1540 system memory bus
1541 bus devices 1542 data memory
1600~1622 steps of the cache line allocate-and-rename method
1700 block diagram 1701 microprocessor
1702 macro instructions 1710 extended translator
1705 architectural registers 1711 micro instructions
1712 architectural registers 1713 shadow count register
1720 extended cache unit
1721 extended block cache logic
1722 data cache 1723 stall signal
1730 bus unit 1740 memory bus
1741 bus devices 1742 data memory
1800~1830 steps of the method for allocating and renaming a block of cached data
Detailed description
The following description is presented in the context of a particular embodiment and its requirements to enable one of ordinary skill in the art to make and use the present invention. Various modifications to the preferred embodiment will, however, be apparent to those skilled in the art, and the general principles discussed herein may be applied to other embodiments. The present invention is therefore not limited to the particular embodiments shown and described here, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
In view of the preceding background discussion of how prefetch operations are performed in a present-day pipelined microprocessor, examples highlighting the limitations of present-day prefetch techniques are now presented with reference to Figs. 1 through 3B. A discussion of the present invention then follows with reference to Figs. 4 through 18. The present invention enables a programmer to direct a microprocessor to allocate a first block of cache lines in its cache in the exclusive MESI state and to copy the contents of a second block of cache lines into the first block, thereby avoiding the program delays that would otherwise be incurred when subsequent store operations perform the data copy.
Referring to Fig. 1, a block diagram illustrates the significant pipeline stages 101-105 of a present-day pipelined microprocessor 100. The microprocessor has a fetch stage 101, a translate stage 102, a register stage 103, an address stage 104, and an execute stage 105.
In operation, the fetch stage 101 retrieves macro instructions 121-123 from an instruction area 120 of system memory for execution by the microprocessor 100. The macro instructions 121-123 are sent to the translate stage 102, which translates them into corresponding sequences of micro instructions (also known as native instructions; not shown) that direct the microprocessor 100 to perform the operations specified by the macro instructions 121-123. Much as factory products move through successive workstations on an assembly line, the micro instructions proceed through the subsequent pipeline stages 103-105 in synchronization with a pipeline clock signal (not shown). Accordingly, micro instructions are sent to the register stage 103. If a particular micro instruction specifies an operand stored in a register of the register stage 103, logic there accesses the register to retrieve the operand, which is sent along with the micro instruction to the address stage 104. The address stage 104 contains logic for generating addresses used to access operands stored in data memory 108. Like the register stage 103, the address stage 104 forwards the generated addresses, together with their corresponding micro instructions, to the execute stage 105.
Execute phase 105 is carried out the specified computing of this micro-order.At current microprocessor 100, the form of computing is determined by instruction set architecture (instruction set architecture), but be familiar with this operator and will find that these computings can not exceed general category, for example logical operation, arithmetical operation, and memory access computing (in other words, data read and data write computing).By the result that computing produced who carries out appointment,, promptly be the core position that is written in the data-carrier store 108 if not be stored in the buffer in temporary stage 103.
One skilled in the art will appreciate that a present-day pipelined microprocessor 100 may have many more stages than the stages 101-105 of Fig. 1, since decomposing the major functions of the pipeline into a larger number of stages is a proven technique for increasing the throughput of instructions 121-123 through the pipeline. For brevity, the pipeline stages 101-105 of the present-day microprocessor 100 shown in Fig. 1 suffice to illustrate the shortcomings of the prior art without burdening the reader with irrelevant detail.
It is notable that a present-day microprocessor 100 includes, in its execute stage 105, both execution logic 106 and a data cache 107. The data cache 107 operates in parallel with the execution of instructions in the pipeline stages 101-105, ensuring that data having a high probability of being accessed by the instructions 121-123 of an application program are already present in the cache 107. Thus, when a data access micro instruction (in other words, a load-from-memory or store-to-memory micro instruction) proceeds to the execute stage 105, the execution logic 106 can perform the data access within one or two pipeline clock cycles rather than incurring a program delay of perhaps hundreds of clock cycles while the data is accessed in data memory 108 over the memory bus 110. In an efficient cache system configuration, the overwhelming majority of loads and stores take place between the execution logic 106 and the data cache 107 over a cache bus 109, and the data cache 107 operates relatively transparently to the micro instruction flow through the pipeline stages 102-105, ensuring that cached copies of data entities remain synchronized and coherent with system memory 108.
MESI (modified, exclusive, shared, invalid) is a widely used protocol for ensuring coherency of cached entries in shared regions of the memory 108 of a system configuration. Although not depicted in Fig. 1, other devices (not shown) in a computer system configuration may share certain regions of memory 108 for the purpose of operating on the same data. For example, a video card may share a region of memory 108 with the microprocessor 100 in order to access video display data generated by the microprocessor 100. As another example, multiple devices on the system bus 110 may communicate with each other by reading and writing data in shared regions of data memory 108. A detailed architectural study of the motivation for the MESI protocol is beyond the scope of the present application; it is sufficient here to appreciate that MESI is in widespread use for ensuring coherency of data between system memory 108 and local cache structures 107.
Because transactions over the memory bus 110 can take hundreds of clock cycles to complete, data is moved into and out of the data cache 107 in blocks of several bytes. These blocks are called cache lines. Although cache line width (in other words, the size of a cache line in bytes) varies across architectures, 32-byte, 64-byte, and even 128-byte line widths are common in present-day system architectures.
Even the most efficient cache structure 107 incurs some unavoidable delay in moving data from memory 108 to the cache 107 over the memory bus 110. Once a cache line has been supplied to the cache 107, however, subsequent accesses to the data entities within that line incur no significant delay, because the speed of the cache 107 and its cache bus 109 is comparable to the speed of the other logic within the microprocessor 100 (for example, the execution logic 106).
Under the MESI protocol, a cache line in the local data cache 107 can be in any of four states: modified, exclusive, shared, and invalid. A cache line in the modified state is one to which a local store operation has been performed but which has not yet been synchronized with main memory 108. It is the responsibility of the local cache 107 to monitor memory transactions over its memory bus 110 issued by other devices (also referred to as bus agents); thus, if a bus agent requests data from a cache line that is in the modified state, the local cache 107 provides the modified data to the requesting bus agent. This monitoring of the bus 110 is known as bus snooping.
A cache line in the exclusive state is one to which the local cache 107 may perform a store operation. The exclusive state implies that the local cache 107 holds exclusive ownership of the cache line; the microprocessor 100 is therefore permitted to modify its contents.
A cache line in the shared state is one that is present in the local caches 107 of two or more devices on the bus 110. Any device may read data from a shared cache line, but none is permitted to modify its contents. To modify data in a shared cache line (in other words, to perform a store operation), a device 100 must first execute appropriate transactions over the memory bus 110 with the other devices to obtain exclusive ownership of the line (in other words, to read the cache line into its local cache 107 in the exclusive MESI state) before modifying its contents. Once exclusive ownership of the line has been obtained, the store operation may be performed, and the state of the line is changed to modified. Requiring exclusive ownership of a cache line before a store operation is posted guarantees data coherency, since at any given point in time only one device 100 may modify the contents of the line.
When the local cache 107 detects (via snooping) a write over the memory bus 110 to one of its cache lines, or when another device issues a bus transaction over the memory bus 110 to obtain exclusive ownership of the line, the state of the line is changed to invalid. A cache line marked invalid holds data that is inconsistent with memory 108 and may be neither read nor written.
Because a cache 107 operates in parallel with the instruction flow in the microprocessor pipeline, a designer can overcome the delay otherwise incurred when data is first fetched from memory 108 into the cache 107 by providing, in the program flow 120, a prefetch macro instruction 122 that loads the data into the cache 107 before access to the data is required. A prefetch instruction 122 typically directs the local cache 107 to load a cache line from memory 108 in parallel with the execution of subsequent instructions, so that by the time an instruction 123 in the program flow 120 accesses data in that cache line, the line is already resident in the cache 107. In the example of Fig. 1, a prefetch instruction 122, PREFETCHT0 [EAX], directs that the cache line addressed by the contents of register EAX be loaded into the cache 107, so that its contents are available when a subsequent data access instruction 123, MOV EBX, [EAX], is executed in the program flow 120; the data access instruction 123 directs the microprocessor 100 to read data from the address specified by register EAX and move it into register EBX. Because the x86 instruction set is widely understood, the prefetch instruction 122 and the data access instruction 123 of the program flow 120 are, for brevity, depicted according to the x86 instruction set architecture. One skilled in the art will appreciate, however, that many other instruction set architectures also provide a prefetch instruction 122 that reads a cache line from memory 108 into the local cache 107 so that a subsequent instruction 123 specifying a data read from the line can execute without delay. If prefetch instructions are placed judiciously in the program flow 120, they effectively overcome the delays caused by initial data accesses to the cache 107 and thereby significantly improve program execution speed. After the transaction over the memory bus 110 that accomplishes the prefetch completes, the requested cache line resides in the cache 107 either in the exclusive state (if the local cache 107 holds the only cached copy of the line) or in the shared state (if other devices also hold cached copies of the requested line). In either state, the data entities within the cache line are immediately available for read access. But as noted above, writing data to a cache line (in other words, performing a store operation) requires exclusive ownership of the line. Consequently, if the prefetch results in the line being cached in the exclusive state, a pending store may be posted to the line at once. If, however, the line arrives from the bus 110 in the shared state, a pending store must be stalled while the cache unit 107 issues transactions over the bus 110 to obtain exclusive ownership of the line; only after the line has been brought into the cache 107 in the exclusive state can the pending store be posted.
Referring now to Fig. 2, a block diagram 200 depicts a cache unit interfaced to memory for performing a prefetch operation within the microprocessor of Fig. 1. The block diagram 200 shows the logic within a microprocessor 201 employed to perform the prefetch. The microprocessor 201 has a translator 210 that receives a flow of macro instructions 202 and translates them into corresponding micro instructions 211. Micro instructions 211 that direct data load and store operations on memory 242 are subsequently sent to a cache unit 220. The cache unit 220 includes record logic 221 and a data cache 222. The record logic 221 is coupled to a bus unit 230. The bus unit 230 interfaces to a system memory bus 240, which in turn is coupled to system memory 242 and other bus devices 241.
An exemplary flow of macro instructions 202 illustrates how a prefetch operation is specified and how subsequent read and store operations may be performed on the prefetched data. A common desktop computer example of such a sequence of operations is the reading and incrementing of a counter in memory. A sequence comprising a prefetch, a read, and a store must be able both to read data from a cache line and to modify data within it. Accordingly, the first macro instruction 202 of the exemplary flow, PREFETCH [EAX], directs the microprocessor 201 to prefetch the cache line whose address corresponds to the contents of register EAX. The second macro instruction 202, MOV EBX, [EAX], directs the microprocessor 201 to read the contents of the memory location whose address is in register EAX and write those contents into register EBX. The third macro instruction 202, INC EBX, directs the microprocessor 201 to increment the contents of register EBX. The fourth macro instruction 202, MOV [EAX], EBX, directs the microprocessor 201 to store the contents of register EBX to the memory location addressed by the contents of register EAX. The prefetch, read, and store operations detailed above merely increment a value at the memory address specified by register EAX. Note that, to use the prefetch instruction 202 effectively, it must be provided sufficiently far ahead of the second macro instruction 202, MOV EBX, [EAX], that the delay caused by loading the cache line specified by the contents of EAX can be absorbed by the parallel execution of intervening macro instructions 202. For brevity, however, the intervening macro instructions 202 are not depicted in the block diagram 200.
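The same flow, sketched in C for clarity (this mapping is illustrative, with the intervening instructions elided):

```c
#include <xmmintrin.h>

/* Read-increment-store of a counter in memory, mirroring the macro
 * instruction flow 202.  Issued back to back like this the prefetch
 * hides nothing; in practice the intervening work absorbs the
 * line-fill latency. */
void increment_counter(int *counter) {
    _mm_prefetch((const char *)counter, _MM_HINT_T0); /* PREFETCH [EAX] */
    /* ... intervening macro instructions execute here ... */
    int ebx = *counter;  /* MOV EBX,[EAX]: read from the cached line */
    ebx++;               /* INC EBX */
    *counter = ebx;      /* MOV [EAX],EBX: stalls if the line is Shared */
}
```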
The first and second macro instructions 202 are translated into prefetch and load micro instructions 211, which are sent to the cache unit 220. The third macro instruction 202 is translated into a corresponding increment micro instruction 211, INC EBX, which directs the execution logic to increment the contents of register EBX. Because no new data is required, this micro instruction 211 is not sent to the cache unit 220.
Finally, the fourth macro instruction 202 in the flow is translated into a store micro instruction 211, ST [EAX], EBX, which directs the execution logic to perform a data store operation writing the contents of register EBX to the memory location addressed by the contents of register EAX. The store micro instruction 211 is accordingly sent to the cache unit 220 as a pending store operation. The record logic 221 then detects whether the cache line targeted by the pending store is present in the data cache 222. If the line is in the exclusive state, the pending store may be posted at once and the state of the line changed to modified. If, on the other hand, the line is in the shared state, the cache unit 220 asserts a stall signal 223 to suspend the progress of micro instructions 211 through the pipeline stages of the microprocessor 201, while the record logic 221 directs its bus unit 230 to execute transactions over the memory bus 240 to obtain exclusive ownership of the line. Once exclusive ownership is obtained, the pending store is permitted to post its data to the cache line, the stall signal is deasserted, and program execution resumes.
Consider now a type of operation that simply writes data to memory without first reading it, or one that may read the data first but in which it is known in advance that a store operation will subsequently be posted. In such cases, issuing a prefetch ahead of time minimizes program delay only for operations that must first read the data. And even then, the delay caused by the store operation is eliminated only if the prefetch results in the desired cache line arriving in the exclusive state; if the prefetch leaves the desired line in the shared state, the delay caused by the store operation is unavoidable. This is a problem, because present-day instruction set architectures provide no way to direct a microprocessor 201 to prefetch an exclusive cache line into its data cache 222. Although a cache line prefetched in response to a prefetch operation may happen to be exclusive, that state cannot be guaranteed. This is because the prefetch macro instruction architecturally presumes that the prefetched data will be read, and its execution results in the line being requested via a bus transaction that fetches a copy regardless of whether the line is held in the shared state. In the x86 architecture, for instance, the transaction issued over the bus 240 as a result of executing an x86 prefetch instruction is a data read operation. A data read operation requests a copy of a cache line regardless of the state in which it is held.
Now see also Fig. 3 A and 3B, it shows a clock pulse figure, in order to two possible bus operation collection 301,302 that the microprocessor of describing by Figure 1 and Figure 2 201 is sent, this bus operation is to send to carry out one via rambus 240 to look ahead and subsequently storage computing.This two operations collection 301,302 is included in 240 the request job 303 from bus unit 230 to rambus in the microprocessor 201, with the response operation 304 of getting back to bus unit 230 equally in microprocessor 201 from rambus 240.Operation collection 301 is described those operation 303-304 performed when the response one fast line taking of looking ahead computing is exclusive state.Operation collection 302 is described those operation 303-304 performed when the response one fast line taking of looking ahead computing is shared state.Described as Fig. 2, when carrying out a prefetched instruction, record logical circuit 221 its bus units 230 of order send a data read request 303, DATA READ[EAX to its rambus 240], require to be buffered the specified fast line taking of device EAX and be sent to its regional high-speed cache 222.It is to send in time point A that this data read request 303 collects 301 in operation, and collecting 302 in operation is to send in time point D.So sending one, 240 responses of this rambus comprise that the data response request 304 of this desired fast line taking gets back to bus unit 230.If this fast line taking is at exclusive state, then in the data response request 304 of operation collection 301, DATA RESP[EAX] .E, be sent back to bus unit 230 at time point B.If this fast line taking is in shared state, then in the data response request 304 of operation collection 302, DATARESP[EAX] .S, be sent back to bus unit 230 at time point E.At this moment, data can read and can not cause the bus Operating Ratio from high-speed cache 222.
When a subsequent store operation targets the cache line provided by the transactions above, the scenario of transaction set 302 illustrates the transactions 303, 304 that must occur before the store can be posted. In transaction set 301, because the cache line was initially delivered in the exclusive state, posting the store requires only issuing a data write transaction 303, DATA WRITE [EAX], at time point C to write the data to memory 242 over the bus 240. In transaction set 302, however, before the data write transaction 303 at time point H can be issued, the transactions 303 and 304 at time points F and G must first be performed to raise the ownership state of the cache line from shared to exclusive. At time point F, the bus unit 230 issues a data read and invalidate request 303, DATA READ/INV [EAX], requesting exclusive ownership of the shared cache line. Hundreds of clock cycles later, at time point G, a response 304, DATA RESP [EAX].E, is received from the bus 240, upgrading the state of the line to exclusive. Only after the response 304 is received at time point G can the data write transaction 303 be posted to the bus 240 at time point H.
It should be noted that the transaction sets 301, 302 of Figs. 3A and 3B depict the bus transactions 303, 304 in generic terms, since different microprocessor architectures use different semantics to describe them. Note also that, for brevity, all of the transactions required to first arbitrate for access to the data bus 240 (for example, BUS REQUEST, BUS GRANT, and so on) have been omitted from the timing diagrams of Figs. 3A and 3B.
The present invention observes that present-day data prefetch instructions are limited in that they do not support stores that are known in advance: there is no way to advantageously prefetch a cache line into the cache 222 with the explicit intent of performing a store to it, regardless of whether a prefetch with intent to store would read the line's contents before posting a store to it. Reviewing the transactions 303, 304 of transaction set 302 makes clear that prefetching a line that arrives in the shared state is helpful only when a read of the line occurs before a store is posted to it. If a store is to be posted to the shared line, program execution must be delayed while the state of the line is raised from shared to exclusive.
Although programmers understand the limitations of present-day prefetch instructions, they still use them to perform prefetches in anticipation of stores, because a prefetch instruction may (sometimes, though not often) obtain exclusive ownership of a cache line in response to its data read request, simply because no other bus device happens to hold a copy of the requested line. Far better, however, would be to avoid prefetching a cache line in the shared state and instead to direct a microprocessor 201 to prefetch the line in the exclusive state. The present invention is directed toward an apparatus and method for obtaining exclusive ownership of a first block of cache lines, copying data into that first block from a second block of cache lines, and optionally writing the data from the second block of cache lines back to memory so that the cache resources it occupies are released. The present invention is now discussed with reference to Figs. 4 through 18.
Referring to Fig. 4, a block diagram shows an extended prefetch instruction 400 according to the present invention. The extended prefetch instruction comprises optional multiple prefix entities 401, followed by a prefetch opcode 402, which is in turn followed by an extended address specifier 403. In one embodiment, each of the prefix and extended address entities 401, 403 is 8 bits wide, and the prefetch opcode entity 402 is one or two bytes in size; unless otherwise modified herein, all of the entities 401-403 conform to the x86 instruction set architecture.
In operation, the prefetch opcode 402 is a specific opcode value that directs a conforming microprocessor to perform a prefetch operation. In an x86 embodiment, the value of the opcode entity 402 is 0F18H. One or more of the optional prefix entities 401 may be used to direct a conforming microprocessor to perform additional types of operations, such as repeating the operation a number of times defined by a counter (for example, the REP prefix in the x86 architecture) or forcing the operation to be performed atomically (for example, the LOCK prefix in the x86 architecture). The extended address specifier 403 specifies the particular type of prefetch operation to be performed. In an x86 embodiment, the extended address specifier 403 is known as the ModR/M byte 403.
According to the present invention, when a prefetch macro instruction 400 is detected by the microprocessor, the microprocessor is directed to prefetch data from memory into cache according to the hint value specified by the contents of the extended address specifier 403, an example of which is discussed with reference to Fig. 5.
Fig. 5 is a table 500 depicting one embodiment of the extended address specifier field 403 of the extended prefetch instruction of Fig. 4, illustrating how the field 403 is encoded according to the present invention to direct the microprocessor to prefetch a cache line in the Exclusive MESI state. For the purpose of teaching the present invention, ModR/M bit fields conforming to the x86 architecture are used herein; however, it is contemplated that the present invention comprehends any architecture that provides the means to encode a prefetch-exclusive hint into an instruction 400. Although the example of Fig. 5 encodes the prefetch-exclusive (i.e., prefetch with intent to store) hint into an extended address specifier 403, one skilled in the art will appreciate that the prefetch hint could as well be encoded as a specific opcode value in the opcode field 402.
In this encoding example, an x86 ModR/M byte encodes the type of prefetch operation prescribed by the prefetch opcode 402 in bits 5:3 of the ModR/M byte. At present, the x86 prefetch instruction uses the values 000, 001, 010, and 011 to prescribe prefetch hints with intent to read. All four values 000-011 direct an x86 microprocessor to prefetch data into its cache with varying degrees of closeness. For example, a T0 hint (i.e., value 001) directs the microprocessor to prefetch a cache line into all levels of the cache hierarchy, while an NTA hint directs the microprocessor to prefetch a cache line into a non-temporal cache structure, at a location close to the processor, while minimizing cache pollution. What the x86 prefetch hint encodings 000-011 have in common, however, is that a data read request is issued over the bus to request a copy of a cache line regardless of the MESI state in which that line is delivered. One embodiment of the present invention encodes an additional hint into the extended address specifier, whereby an exclusive (.S) hint directs the microprocessor according to the present invention to prefetch a specified cache line in the Exclusive state. Fig. 5 shows a prefetch-exclusive hint encoded as the value 100 in bits 5:3 of an x86 ModR/M byte. When the prefetch.s hint is encoded into a prefetch instruction 400 according to the present invention, a conforming microprocessor is directed to issue transactions over a memory bus to prefetch the data in the Exclusive MESI state. In an x86 embodiment, the specific transaction issued in response to the prefetch.s hint of a prefetch instruction 400 is a data read and invalidate transaction, as described above with reference to Fig. 3B. In the example of Fig. 3B, that data read and invalidate transaction was employed to raise a cache line from the Shared state to the Exclusive state.
In the x86 instruction set architecture, the encoding of value 100 in bits 5:3 was heretofore declared illegal, as are the encodings of values 101-111 shown in table 500. An illegal ModR/M byte encoding causes an exception. According to the present invention, however, in an x86 embodiment this additional encoding of a prefetch-exclusive hint is made legal, and causes the aforementioned bus transactions to be issued to prefetch a cache line in the Exclusive state.
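By way of illustration only (the byte values below follow from standard x86 encoding rules applied to the value 100 of table 500; they are not recited explicitly herein), a prefetch-exclusive of the cache line addressed by EAX could be assembled as:

    ; prefetch.s [eax]  -- extended prefetch with intent to store
    ;   0F 18   prefetch opcode 402
    ;   20      ModR/M byte 403: mod=00 (memory operand),
    ;           reg=100 (prefetch.s hint), r/m=000 ([EAX])
    db 0Fh, 18h, 20h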
It is well known that, because the interaction between a cache structure and memory does not occur within the instruction flow of the microprocessor's pipeline, a prefetch directed by the extended prefetch instruction 400 can only be requested as a hint. If a cache line is not currently being accessed by memory, the prefetch operation can be performed at once; if the line is busy, the prefetch operation must be deferred until the memory access completes.
Referring now to Fig. 6, a block diagram is presented detailing a microprocessor 600 according to the present invention for performing a prefetch with intent to store. The microprocessor 600 has three notable stage categories: fetch, translate, and execute. Fetch logic 601 in the fetch stage retrieves macro instructions from an instruction memory 603 into an instruction cache 602. The retrieved macro instructions are then forwarded to the translate stage via an instruction queue 604. Translation logic 606 in the translate stage is coupled to a micro instruction queue 608 and includes extended translation logic 607. Execution logic 609 in the execute stage includes an extended cache unit 610. The extended cache unit 610 has a data cache 611 coupled to extended fill logic 612. The extended fill logic 612 is coupled to a bus unit 613, and the bus unit 613 is coupled to a data memory 614.
Operationally, the fetch logic 601 retrieves formatted instructions according to the present invention from the instruction memory 603 into the instruction cache 602, and forwards the macro instructions to the instruction queue 604 in execution order. Macro instructions retrieved from the instruction queue 604 are routed to the translation logic 606, which translates each macro instruction into a corresponding sequence of micro instructions directing the microprocessor 600 to perform the operation specified by the macro instruction. The extended translation logic 607 detects extended prefetch macro instructions according to the present invention and provides for their extended prefix and address specifier entities to be translated accordingly. In an x86 embodiment, the extended translation logic 607 is configured to detect an x86 prefetch instruction and to translate the ModR/M byte of that x86 prefetch instruction, according to the conventions described with reference to Figs. 4 and 5, into a prefetch micro instruction sequence directing the microprocessor 600 to prefetch a cache line exclusively into the data cache 611.
The micro instructions are then routed from the micro instruction queue 608 to the execution logic 609, in which the extended cache unit 610 is configured to perform an exclusive prefetch operation according to the present invention. When the execution logic 609 executes a prefetch micro instruction sequence, its extended fill logic 612 directs the bus unit 613 to issue transactions over the memory bus 605 to memory 614 requesting that a specified cache line be prefetched into the data cache 611 in the Exclusive MESI state.
One skilled in the art will appreciate that the microprocessor 600 described with reference to Fig. 6 is a simplified representation of a present-day pipeline microprocessor 600. In fact, as noted earlier, a present-day pipeline microprocessor comprises many pipeline stages, all of which can be generalized into the three stage groups shown in the block diagram of Fig. 6, which thus illustrates the essential elements required to implement the invention described above. For clarity, elements of a microprocessor 600 that are not pertinent to the present invention are not described here.
Referring to Fig. 7, a block diagram 700 is presented depicting a cache unit interfaced to memory for performing a prefetch with intent to store within the microprocessor of Fig. 6. The block diagram 700 shows the logic within the microprocessor 600 that is employed to perform the prefetch operation. An extended translator 710 of the microprocessor 701 receives a flow of macro instructions 702 and translates the macro instructions 702 into corresponding micro instructions 711. Micro instructions 711 that direct data loads from and stores to memory 742 are subsequently routed to an extended cache unit 720. The extended cache unit 720 includes extended fill logic 721 and a data cache 722. The extended fill logic 721 is coupled to a bus unit 730. The bus unit 730 is interfaced to a system memory bus 740, which in turn is coupled to the data memory 742 and to other bus agents 741.
An exemplary flow of macro instructions 702 thus illustrates how a prefetch with intent to store is prescribed. An extended prefetch instruction 702, PREFETCH.S [EAX], directs the microprocessor 701 to prefetch, in the Exclusive state, the cache line whose address corresponds to the contents of register EAX. A second macro instruction 702, MOV EBX, [EAX], directs the microprocessor 701 to read the contents of the memory location whose address is specified by register EAX and to write those contents into register EBX. A third macro instruction 702, INC EBX, directs the microprocessor 701 to increment the contents of register EBX. A fourth macro instruction 702, MOV [EAX], EBX, directs the microprocessor 701 to store the contents of register EBX to the memory location corresponding to the contents of register EAX. It is noted that, to employ the exclusive prefetch instruction 702, PREFETCH.S [EAX], effectively, it must be executed sufficiently in advance of the second macro instruction 702, MOV EBX, [EAX], so that the delay caused by loading the cache line specified by the contents of EAX can be absorbed by the parallel execution of intervening macro instructions 702. For clarity, however, the execution of those intervening macro instructions 702 is not depicted in the block diagram 700.
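The exemplary flow of macro instructions 702 just described may be summarized in the following sketch; PREFETCH.S is the extension proposed herein, not an instruction of the existing x86 set, and the intervening instructions are elided:

    prefetch.s [eax]     ; request the line at [EAX] in the Exclusive state
    ; ... intervening macro instructions 702 execute in parallel ...
    mov   ebx, [eax]     ; load: the line is already cached Exclusive
    inc   ebx            ; modify the loaded value
    mov   [eax], ebx     ; store: posts at once, no bus transaction needed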
The translator 710 translates the extended prefetch macro instruction 702 into a corresponding exclusive prefetch micro instruction 711, PREFETCH.S [EAX], which is routed to the extended cache unit 720. The extended fill logic 721 queries the data cache 722 to determine whether the requested cache line is already present and valid (i.e., not in the Invalid state) within the data cache 722. If not, the extended fill logic 721 directs the bus unit 730 to issue transactions over the system memory bus 740 to obtain the requested cache line from memory 742. If no other bus agent 741 has a copy of the requested line, the extended fill logic 721 delivers the requested line to its data cache 722 in the Exclusive state. If one bus agent 741 has a local copy of the requested line in the Exclusive state, then, according to the particular bus transaction protocol employed, that agent snoops the transaction on the bus 740 requesting the line and changes its local copy to Invalid. If the local copy has been modified, the bus agent writes the modified data out to the bus 740, so that the microprocessor 701 can obtain exclusive ownership of the line. If several bus agents share the line, they all change their local copies to Invalid, so that the line can be delivered to the microprocessor 701 in the Exclusive state. In any of these cases, the requested cache line is delivered to the cache 722 in the Exclusive state and is available to a subsequent store operation.
The third macro instruction 702 is translated into a corresponding increment micro instruction 711, INC EBX, directing the execution logic to increment the contents of register EBX. Because no new data is required from memory, this micro instruction 711 is not routed to the extended cache unit 720.
Finally, the fourth macro instruction 702 in the flow is translated into a store micro instruction 711, ST [EAX], EBX, directing the execution logic to perform a data store operation writing the contents of register EBX to the memory location specified by the contents of register EAX. The store micro instruction 711 is thus routed to the cache unit 720 as a pending store operation. The fill logic 721 then detects that the cache line targeted by the pending store is present in the data cache 722 and, as a result of the exclusive prefetch, is in the Exclusive state. The store can therefore be posted at once without delay. Unlike the microprocessor 201 of Fig. 2, the extended cache unit 720 according to the present invention need not assert a stall signal 723 in order to post the pending store, because the target cache line was prefetched exclusively.
Referring now to Fig. 8, a timing diagram 800 is presented depicting the bus transaction sets 801, 802 issued over the memory bus 740 by the microprocessor of Figs. 6 and 7 according to the present invention to perform a prefetch with intent to store. The two transaction sets 801, 802 comprise request transactions 801 issued from the bus unit 730 within the microprocessor 701 to the memory bus 740, and response transactions 802 returned from the memory bus 740 to the bus unit 730 within the microprocessor 701. The timing diagram 800 depicts the transaction sets 801, 802 performed when a requested cache line is prefetched in the Exclusive state, in response to a prefetch-exclusive macro instruction according to the present invention prescribing a prefetch with intent to store. As described above, when a prefetch-exclusive instruction is executed, the extended fill logic 721 directs its bus unit 730 to issue a data read and invalidate request 801, DATA READ/INV [EAX], over the memory bus 740, requesting that the cache line specified by register EAX be delivered to the local cache 722 in the Exclusive MESI state. The data read and invalidate request 801 is issued at time point A. At time point B, the memory bus 740 responds by returning a data response 802, DATA RESP [EAX].E, to the bus unit 730. At that point, data for a store operation can be read from, or written to, the cache 722 without incurring bus transaction delay. As shown in the timing diagram 800 of Fig. 8, when a subsequent store instruction in the program flow directs the microprocessor to modify the line held in the Exclusive state, a data write transaction 801, DATA WRITE [EAX], may then occur at time point C; one skilled in the art will note that this data write transaction 801 is not required in order to obtain exclusive ownership of a cache line.
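The transaction sequence of timing diagram 800 may be sketched schematically as follows (an illustrative trace only, not the output of any particular bus protocol):

    ; time A:  DATA READ/INV [EAX]   ; request the line with intent to store
    ; time B:  DATA RESP [EAX].E     ; line delivered in the Exclusive state
    ;          (the pending store now completes with no bus traffic)
    ; time C:  DATA WRITE [EAX]      ; eventual writeback of the modified
    ;                                ; line; not required for ownership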
As with Figs. 3A and 3B, the transaction sets 801, 802 of Fig. 8 are represented in a generic fashion, because different microprocessor architectures employ different semantics to describe bus transactions 801, 802. The transactions 801, 802 depicted in Fig. 8 roughly follow x86 conventions, but the depiction is intended only to teach the present invention; such conventions do not restrict the present invention to this particular instruction set architecture. In addition, it is noted that, for clarity, transactions required to first obtain access to the data bus 740 (e.g., BUS REQUEST, BUS GRANT, and the like) have been omitted from the timing diagram 800.
The present invention contemplates not only the exclusive prefetch of a single cache line, but also situations in which the data of an entire block must be modified. Figs. 9 through 11 are therefore directed to a discussion of the exclusive prefetch of a block of data.
Referring to Fig. 9, a block diagram is presented showing an extended block prefetch instruction 900 according to the present invention. The extended block prefetch instruction 900 comprises optional multiple prefix entities 901, one of which is a repeat prefix 901. The prefix entities 901 are followed by a prefetch opcode 902, which is in turn followed by an extended address specifier 903. In one embodiment, each of the prefix and extended address entities 901, 903 is 8 bits in size, and the prefetch opcode entity 902 is one or two bytes in size; unless modified herein, all entities 901-903 conform to the x86 instruction set architecture. In this embodiment, the x86 repeat prefix (REP) 901 is used to indicate a block prefetch operation.
Operationally, the prefetch opcode 902 is a specific opcode value directing a conforming microprocessor to perform a prefetch operation. In an x86 embodiment, the value of the opcode entity 902 is 0F18H. The extended address specifier 903 specifies the particular type of prefetch operation to be executed. In an x86 embodiment, the extended address specifier 903 is the ModR/M byte 903.
As with the instruction of Fig. 4, when a microprocessor according to the present invention detects a prefetch macro instruction 900, the microprocessor is directed to prefetch data from memory into cache according to the hint value specified by the contents of the extended address specifier 903. The encoding example of Fig. 5 likewise applies to the encoding of the hint in the extended address specifier 903 of the block prefetch. If, however, the microprocessor detects a repeat prefix 901 within the extended prefetch instruction, the microprocessor attempts to prefetch a specified number of cache lines into its local cache in the Exclusive state, where the number of cache lines is specified in an architectural register of the microprocessor. In one embodiment, the number of cache lines is specified in register ECX of an x86-compatible microprocessor.
Referring to Fig. 10, a block diagram 1000 is presented depicting a cache unit interfaced to memory for performing a block prefetch with intent to store within the microprocessor 600 of Fig. 6. Elements of the microprocessor 1001 of Fig. 10 are identified similarly, and operate similarly, to like elements of the microprocessor 701 of Fig. 7, with the hundreds digit 7 of Fig. 7 replaced by 10. To enable the block prefetch operation according to the present invention, an extended translator 1010 translates an extended prefetch instruction having a repeat prefix 1002 into a micro instruction sequence 1011 directing the execution of an exclusive block prefetch. In addition, a shadow count register 1013 is employed to hold the count of cache lines to be prefetched as loaded into the architectural register 1012, and extended block fill logic 1021 directs the bus unit 1030 to request the exclusive prefetch of the specified block of cache lines for delivery to the data cache 1022.
To initiate an exclusive block prefetch, a first macro instruction 1002, MOV ECX, COUNT, initializes architectural register ECX with the count of cache lines in the block to be prefetched exclusively. The extended translator 1010 translates the first macro instruction into a load micro instruction 1011, LD ECX, COUNT, directing the microprocessor to load the count into ECX. Once the count is loaded into ECX, it is also transparently copied to the shadow count register 1013, SHECX. Thereafter, other instructions 1002 may modify the architectural register 1012 without disturbing the count that governs the prefetch operation.
After the count is initialized, the extended translator 1010 translates an extended block prefetch instruction 1002, REP.PREF.S [EAX], which directs the microprocessor 1001 to prefetch, in the Exclusive state, the number of cache lines specified by ECX into its local cache, the address of the first prefetched line being specified by register EAX. In response to the micro instruction sequence 1011 prescribing the exclusive block prefetch, the extended block fill logic 1021 directs the bus unit 1030 to issue bus transactions over the memory bus 1040 requesting, in the Exclusive state, the cache lines beginning at the address specified by register EAX. As the cache lines are received, the fill logic 1021 allocates them into the data cache 1022. Once present in the cache in the Exclusive state, any or all of the prefetched cache lines can be modified without incurring additional delay.
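Under the x86-compatible embodiment just described, initializing and issuing the block prefetch reduces to the following two-instruction sketch, where COUNT is an illustrative line count and REP PREFETCH.S denotes the proposed extension:

    mov   ecx, COUNT      ; line count; transparently copied to SHECX 1013
    rep prefetch.s [eax]  ; prefetch COUNT lines, starting at [EAX],
                          ; each in the Exclusive MESI state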
Referring now to Fig. 11, a timing diagram 1100 is presented depicting the bus transaction sets 1101, 1102 issued according to the present invention by the microprocessor 1001 of Figs. 6 and 10 over the memory bus 1040 to perform a block prefetch with intent to store. For ease of illustration, the exemplary system configuration of Fig. 11 employs 32-byte cache lines; one of ordinary skill in the art will appreciate from the discussion, however, that the present invention comprehends the cache line widths of all contemplated system configurations. The two transaction sets 1101, 1102 comprise request transactions 1101 issued from the bus unit 1030 within the microprocessor 1001 to the memory bus 1040, and response transactions 1102 returned from the memory bus 1040 to the bus unit 1030 within the microprocessor 1001. The timing diagram 1100 depicts the transaction sets 1101, 1102 performed when a requested block of cache lines is prefetched in the Exclusive state, in response to a block prefetch-exclusive macro instruction, including a repeat prefix, according to the present invention prescribing a block prefetch with intent to store. As described above, when a block prefetch-exclusive instruction is executed, the extended fill logic 1021 directs its bus unit 1030 to issue multiple data read and invalidate requests 1101, corresponding in number to the count of cache lines specified in the architectural register. The multiple requests cover the addresses of all the cache lines within the block, beginning at the address initially specified by the contents of architectural register EAX. Although the bus requests 1101 are shown issuing in ascending address order, it is noted that, in view of the variety of conventional memory bus protocols, the present invention also comprehends descending, random, and irregular orderings. The first data read and invalidate request 1101 is issued at time point A, the second at time point B, and so on, with the final request issued at time point D. In many architectures the bus requests are tagged, so that responses 1102 may begin returning, at time point C, before the final request has issued. At time point C, at least one cache line of the block is available for use by a pending store. To ensure that delay is minimized, however, store operations to the block of cache lines are best deferred until time point E, by which time all of the responses 1102 have arrived in the Exclusive state.
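Assuming the 32-byte lines of this example and a count of three in ECX, the request stream 1101 would cover the following addresses (a schematic trace only):

    ; time A:   DATA READ/INV [EAX]      ; line 0
    ; time B:   DATA READ/INV [EAX+32]   ; line 1
    ;           DATA READ/INV [EAX+64]   ; line 2
    ; times C..E: DATA RESP .E responses 1102 return; stores to the block
    ;           are best deferred to time E, when all lines are Exclusive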
As in the description of Fig. 8 above, once a requested cache line arrives in the Exclusive MESI state, data for a store operation can be read from, or written to, the cache without incurring bus transaction delay, and the data write transaction subsequently caused by a store instruction executes without delay; it is noted, however, that such a data write transaction does not result from execution of the block prefetch-exclusive instruction itself.
Referring now to Fig. 12, a flow chart 1200 is presented depicting a method according to the present invention for performing a prefetch with intent to store.
Flow begins at block 1202, where, according to the present invention, a flow of macro instructions is routed to an instruction queue. Flow then proceeds to block 1204.
At block 1204, a next macro instruction is fetched from the instruction queue and routed to an extended translator. Flow then proceeds to decision block 1206.
At decision block 1206, an evaluation is made to determine whether the next macro instruction is an extended prefetch instruction. If so, flow proceeds to block 1208; if not, flow proceeds to block 1210.
At block 1208, the detected extended prefetch instruction is translated into a prefetch micro instruction sequence prescribing a prefetch with intent to store, directing the microprocessor to prefetch the specified cache line in the Exclusive state. Flow then proceeds to block 1212.
At block 1210, the macro instruction is translated into a corresponding micro instruction sequence directing the microprocessor to perform a specified operation. Flow then proceeds to block 1212.
At block 1212, a next micro instruction sequence is routed to execution logic within the microprocessor. Flow then proceeds to decision block 1214.
At decision block 1214, an evaluation is made to determine whether the next micro instruction sequence is a prefetch sequence prescribing an intent to store. If so, flow proceeds to block 1216; if not, flow proceeds to block 1218.
At block 1216, in response to the prefetch-with-intent-to-store micro instruction sequence, bus transaction requests are issued to a memory bus requesting exclusive ownership of a specified cache line. The line is subsequently delivered to the microprocessor in the Exclusive MESI state, whereupon it can be modified by a store operation without incurring the delay otherwise caused by raising the state of the line. Flow then proceeds to block 1220.
At block 1218, the next micro instruction sequence is executed. Flow then proceeds to block 1220.
At block 1220, the method completes.
Referring now to Fig. 13, a flow chart 1300 is presented depicting a method according to the present invention for performing a block prefetch with intent to store.
Flow begins at block 1302, where, according to the present invention, a flow of macro instructions is routed to an instruction queue. Flow then proceeds to block 1304.
At block 1304, a next macro instruction is fetched from the instruction queue and routed to an extended translator. Flow then proceeds to decision block 1306.
At decision block 1306, an evaluation is made to determine whether the next macro instruction is an extended block prefetch instruction. If so, flow proceeds to block 1310; if not, flow proceeds to block 1308.
At block 1310, the detected extended block prefetch instruction is translated into a block prefetch micro instruction sequence prescribing an intent to store, directing the microprocessor to prefetch a specified number of cache lines in the Exclusive state. Flow then proceeds to block 1312.
At block 1308, the macro instruction is translated into a corresponding micro instruction sequence directing the microprocessor to perform a specified operation. Flow then proceeds to block 1312.
At block 1312, a next micro instruction sequence is routed to execution logic within the microprocessor. Flow then proceeds to decision block 1314.
At decision block 1314, an evaluation is made to determine whether the next micro instruction sequence is a block prefetch sequence prescribing an intent to store. If so, flow proceeds to block 1318; if not, flow proceeds to block 1316.
At block 1316, the next micro instruction sequence is executed. Flow then proceeds to block 1328.
At block 1318, in response to the block prefetch-with-intent-to-store micro instruction sequence, a temporary counter is initialized to zero to monitor the number of bus transactions to be issued requesting exclusive ownership of the cache lines of the block. Flow then proceeds to block 1320.
At block 1320, a first cache line address is prescribed for a first data read and invalidate bus transaction. The first cache line address derives from the address specified by the extended block prefetch instruction, plus the count of block 1318 multiplied by the cache line width. Because the count is initially zero, the first cache line address equals the address specified by the extended block prefetch instruction. Flow then proceeds to block 1322.
At block 1322, a data read and invalidate transaction is issued over the memory bus to prefetch the first cache line in the Exclusive MESI state. Flow then proceeds to block 1324.
At block 1324, the count is incremented, and flow proceeds to decision block 1326.
At decision block 1326, an evaluation is made to determine whether the incremented count equals the number of cache lines to be prefetched, which number is stored in the shadow register. If not, flow proceeds to block 1320, where another iteration is performed to fetch the next cache line. If the count equals the contents of the shadow register, all of the bus transactions have been issued, and flow proceeds to block 1328.
At block 1328, the method completes.
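The iteration of blocks 1318 through 1326 is carried out by the fill logic in hardware; a software-visible equivalent, assuming 32-byte cache lines and issuing the proposed instruction once per line, might be sketched as follows:

    ; eax = base address, edx = line count (mirroring the shadow register)
            xor   ecx, ecx        ; block 1318: count = 0
    next:   mov   ebx, ecx
            shl   ebx, 5          ; count times the 32-byte line width
            add   ebx, eax        ; block 1320: line address = base + count*32
            prefetch.s [ebx]      ; block 1322: issue data read and invalidate
            inc   ecx             ; block 1324: increment the count
            cmp   ecx, edx        ; block 1326: all lines requested?
            jb    next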
The present invention can be applied broadly to solve caching problems associated with data modification operations. The present invention is also directed to further embodiments addressing the programmer's wish to direct a microprocessor to obtain exclusive ownership of a cache line, or of a block of cache lines, where that exclusive ownership is acquired with the intent to overwrite whatever data the line or block presently holds. In those situations there is no need to prefetch the cache line into the local cache at all; it suffices to allocate the line in the local cache in the Exclusive state. The present inventors have also observed that many memory bus architectures include a bus transaction that can be employed to allocate a cache line in the Exclusive state without the additional overhead of transferring the line's data from memory to the local cache; this transaction is called a zero-length read and invalidate transaction. A zero-length read and invalidate transaction directs all bus agents holding a copy of a specific cache line to transition their copies to the Invalid state; the read is called zero-length because no data is transferred in response to the transaction. At time point F in the examples of Figs. 3A and 3B, because the data had already been read, such a transaction 303 could be employed to request exclusive ownership of the cache line specified by EAX. At time point A in the example of Fig. 8, and at time points A and B in the example of Fig. 11, it was necessary both to prefetch the contents of the specific cache lines and to obtain exclusive ownership of them. In the embodiments of the invention that follow, however, because the programmer intends only to overwrite the contents of the specific cache lines, only the acquisition of exclusive ownership is discussed.
As noted earlier, memory copy operations are pervasive in present-day application programs, particularly in the allocation of video buffers. For a refresh, a large image display is composed from several smaller image displays; when the allocation of a smaller image is complete, its contents are copied to the corresponding position of the large image display, with the granularity of the hierarchy determined by the complexity of the application. One skilled in the art will appreciate that such memory copies can be performed very efficiently within a local microprocessor cache, because most local cache structures do not require that data actually be copied from a first part of the cache to a second part of the cache to accomplish a memory copy; it suffices to change the address (i.e., the source location) of the first part of the cache so that it corresponds to a destination location in memory. The technique by which a memory copy can be performed in this way is referred to as renaming, or a rename operation. A cache line is thus renamed by modifying its address, or tag, to point to another memory location, without changing the contents of the line. Embodiments of the present invention accordingly provide the programmer the ability, via extended prefetch instructions, to direct a microprocessor to rename a single cache line or a block of cache lines, where execution of the extended prefetch instruction proceeds in parallel with the execution of subsequent program instructions.
Referring now to Fig. 14, a table 1400 is presented illustrating an alternative encoding of the extended address specifier entity according to the present invention, the encoding prescribing the allocation and renaming of cache lines within a local data cache. As with the similar encoding embodiment discussed with reference to Fig. 5, ModR/M bit fields conforming to the x86 architecture are used herein for the purpose of teaching the present invention; however, it is contemplated that the present invention comprehends any architecture supporting the means to encode an allocate-and-rename hint into an extended prefetch instruction 400. Although the example of Fig. 14 encodes the allocate-and-rename hint into an extended address specifier 403, one skilled in the art will appreciate that the hint could as well be encoded as a specific opcode value in the opcode field 402 of any instruction employed.
The alternative encoding embodiment of Fig. 14 extends the capabilities of the microprocessor 600 so that an implicitly specified cache line may be held exclusively and a memory copy performed upon it. The allocate-and-rename examples discussed herein are very useful for increasing the execution speed of application programs, because the store operations otherwise needed to rename a cache line, or a block of cache lines, can be eliminated from the program flow altogether. Accordingly, Fig. 14 shows an allocate-and-rename hint encoded as the value 100 in bits 5:3 of an x86 ModR/M byte. When the allocate-and-rename hint is encoded into a prefetch instruction 400 according to the present invention, a conforming microprocessor is directed to issue transactions over a memory bus to allocate the specific cache lines in the Exclusive MESI state without fetching the data of those lines. As with the embodiment discussed with reference to Fig. 5, in an x86 embodiment of the present invention these encodings are made legal, and cause the zero-length read and invalidate bus transaction to be issued, obtaining a cache line in the Exclusive state without transferring its contents.
Referring to Fig. 15, a block diagram 1500 is presented depicting a cache unit interfaced to memory for performing an allocate-and-rename operation within the microprocessor of Fig. 6. In this embodiment, an instruction encoded for allocation and renaming according to Fig. 14 is executed in place of an exclusive prefetch. The block diagram 1500 shows the logic within the microprocessor 1501 employed to perform the allocate-and-rename operation. An extended translator 1510 of the microprocessor 1501 receives a flow of macro instructions 1502 and translates the macro instructions 1502 into corresponding micro instructions 1511. Micro instructions 1511 that direct data loads from and stores to memory 1542 are subsequently routed to an extended cache unit 1520. The extended cache unit 1520 includes extended cache logic 1521 and a data cache 1522. The extended cache logic 1521 is coupled to a bus unit 1530. The bus unit 1530 is interfaced to a system memory bus 1540, which in turn is coupled to the data memory 1542 and to other bus agents 1541.
An exemplary flow of macro instructions 1502 illustrates how an allocate-and-rename operation is prescribed according to the encoding 100 of Fig. 14, and how an implicit store operation is carried out upon a destination region 1525 of the local cache 1522 so that the contents of the destination region 1525 come to match the contents of a source region 1524, thereby accomplishing a memory copy, or rename operation, within the local cache 1522 in parallel with the execution of subsequent program instructions.
Accordingly, the translator 1510 translates a MOV EDI, #DEST macro instruction 1502 into an LD EDI, #DEST micro instruction directing the microprocessor 1501 to load the value DEST into architectural register EDI 1505, where DEST is the address of a first cache line onto which the contents of a second cache line, SRC, are to be copied. LD EDI, #DEST is routed to execution logic (not shown), which loads DEST into EDI 1505. Next, the extended translator 1510 translates an extended allocate-and-rename instruction 1502 in the exemplary flow, PREF.R [SRC], into an allocate-and-rename micro instruction 1511, PREF.R [SRC], directing the microprocessor 1501 to obtain exclusive access to the cache line whose address is prescribed by register EDI 1505 and to perform a rename operation within the local cache 1522 so that the contents of cache line SRC 1524 are copied to cache line DEST 1525. In an alternative embodiment, the micro instructions 1511 corresponding to the extended allocate-and-rename instruction 1502, PREF.R [SRC], further direct the microprocessor 1501 to write back (i.e., flush) the contents of cache line SRC 1524, thereby releasing data cache resources. This alternative embodiment is configured to overcome a drawback of caches managed by a least-recently-used (LRU) algorithm, namely that when the programmer intends the resources of cache line SRC 1524 to be released, the LRU algorithm will not release them, because the line has recently been used.
The allocate-and-rename micro instruction 1511 is then routed to the extended cache unit 1520. There, the extended cache logic 1521 directs the bus unit 1530 to issue a zero-length read and invalidate transaction over the system memory bus 1540 to obtain exclusive ownership of the requested cache line from memory 1542. Once exclusive ownership of the requested line has been obtained, the extended cache logic 1521 directs the data cache 1522 to rename the source region SRC 1524 to the destination address prescribed by EDI 1505, and to mark its MESI state as Modified. Another embodiment of the present invention contemplates a copy operation within the cache 1522 whereby, after exclusive ownership of DEST has been obtained, the contents of SRC 1524 are actually copied to DEST 1525 rather than renamed. In this embodiment, once the rename operation completes, the extended cache logic 1521 directs the bus unit 1530 to issue a data write transaction over the system memory bus 1540 to flush the contents of SRC 1524, thereby releasing storage within the cache 1522.
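The exemplary flow 1502 therefore reduces to the following sketch, where PREF.R denotes the proposed allocate-and-rename extension and DEST and SRC are illustrative cache-line-aligned addresses:

    mov    edi, DEST     ; EDI 1505 <- destination cache line address
    pref.r [SRC]         ; allocate DEST Exclusive via a zero-length read
                         ; and invalidate, then rename (or copy) SRC 1524
                         ; to DEST 1525, optionally flushing SRC afterward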
Referring to Fig. 16, a flow chart 1600 is presented depicting a method according to the present invention for performing an allocate-and-rename operation on a single cache line.
Flow begins at block 1602, where, according to the present invention, a flow of macro instructions is routed to an instruction queue. Flow then proceeds to block 1604.
At block 1604, a next macro instruction is fetched from the instruction queue and routed to an extended translator. Flow then proceeds to decision block 1606.
At decision block 1606, an evaluation is made to determine whether the next macro instruction is an extended prefetch instruction encoded to prescribe an allocate-and-rename operation on a cache line. If so, flow proceeds to block 1608; if not, flow proceeds to block 1610.
At block 1608, the detected extended prefetch instruction is translated into an allocate-and-rename micro instruction sequence directing the microprocessor to obtain exclusive ownership of a specified first cache line and, upon obtaining that ownership, to copy the contents of a second cache line to the first cache line. The address of the first cache line is held in an architectural register of the microprocessor. The allocate-and-rename micro instruction sequence optionally directs the microprocessor to flush the contents of the second cache line. Flow then proceeds to block 1612.
At block 1610, the macro instruction is translated into a corresponding micro instruction sequence directing the microprocessor to perform a specified operation. Flow then proceeds to block 1612.
At block 1612, a next micro instruction sequence is routed to execution logic within the microprocessor. Flow then proceeds to decision block 1614.
At decision block 1614, an evaluation is made to determine whether the next micro instruction sequence is an allocate-and-rename sequence. If so, flow proceeds to block 1616; if not, flow proceeds to block 1618.
At block 1616, in response to the allocate-and-rename micro instruction sequence, bus transaction requests are issued to a memory bus requesting exclusive ownership of a specified cache line. In response over the bus, exclusive access to the cache line is obtained. Flow then proceeds to block 1620.
At block 1620, the fill logic according to the present invention copies the contents of the second cache line (SRC) to the first cache line (DEST), or renames the cache tag of the second cache line into a tag pointing to the corresponding first cache line. Flow then optionally proceeds to optional block 1622, or else directly to block 1624.
At block 1618, the next micro instruction sequence is executed. Flow then proceeds to block 1624.
At optional block 1622, the contents of the second cache line, SRC, are flushed so that its entry is released back to memory. Flow then proceeds to block 1624.
At block 1624, the method completes.
Fig. 17 is a block diagram 1700 depicting a cache unit interfaced to memory for performing a block allocate-and-rename operation within the microprocessor 1701 of Fig. 6. Elements of the microprocessor 1701 of Fig. 17 are identified similarly, and operate similarly, to like elements of the microprocessor 1501 of Fig. 15, with the hundreds digit 5 of Fig. 15 replaced by 7. To enable the block allocate-and-rename operation according to the present invention, an extended translator 1710 translates an extended allocate-and-rename instruction having a repeat prefix 1702, as described with reference to Fig. 9, into a micro instruction sequence 1711 directing a block prefetch and initialization operation. In addition, a shadow count register 1713 is employed to hold, from architectural register ECX 1712, the count of cache lines to be allocated and renamed. Extended block cache logic 1721 directs the bus unit 1730 to request exclusive ownership of the specified block of destination cache lines DEST 1725 and, once that ownership is obtained, performs rename operations on those lines within the data cache 1722 so that their contents match the contents of the source block of cache lines SRC 1724; after the memory copy completes, the bus unit may optionally be directed to flush the contents of the source block SRC 1724 so that they are released back to memory 1742.
To initiate the block allocate-and-rename operation, a first macro instruction 1702, MOV ECX, #COUNT, initializes architectural register ECX with the count of cache lines in the block 1725 to be allocated and renamed. The extended translator 1710 translates the first macro instruction into a load micro instruction 1711, LD ECX, #COUNT, directing the microprocessor 1701 to load the count into ECX 1712. Once the count is loaded into ECX 1712, it is also transparently copied to the shadow count register 1713, SHECX. Thereafter, other instructions 1702 may modify the contents of the architectural register 1712 without disturbing the count that governs the allocate-and-rename operation.
After the count is initialized, a second macro instruction 1702, MOV EDI, #DEST, is routed to the translator 1710, directing the microprocessor 1701 to load a prescribed address, DEST, into architectural register EDI 1705. The translator 1710 translates the second macro instruction 1702 into a load micro instruction, LD EDI, #DEST, directing the microprocessor 1701 to load the value DEST into EDI 1705.
Once the value DEST has been loaded into EDI 1705, the extended translator 1710 next translates a block allocate-and-rename instruction 1702, REP.PREF.R [SRC], directing the microprocessor 1701 to allocate and rename, in the Exclusive state, the number of cache lines specified by ECX within the local cache, including those cache lines specified by the contents of EDI 1705. In response to the micro instruction sequence 1711 prescribing the exclusive block allocate-and-rename operation, the extended block cache logic 1721 directs the bus unit 1730 to request, over the memory bus 1740, exclusive ownership of the cache lines beginning at address DEST; upon obtaining that exclusive ownership, the extended block cache logic 1721 allocates each of the counted cache lines into the data cache 1722 and copies the contents beginning at address SRC 1724 to the counted lines. After the allocate-and-rename operation completes, the contents at address SRC 1724 may optionally be flushed to release internal cache resources.
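The initialization and block allocate-and-rename just described may be sketched as follows, with COUNT, DEST, and SRC as illustrative operands and REP.PREF.R denoting the proposed extension:

    mov    ecx, COUNT    ; line count; transparently copied to SHECX 1713
    mov    edi, DEST     ; EDI 1705 <- first destination line address
    rep pref.r [SRC]     ; allocate COUNT lines at DEST in the Exclusive
                         ; state and fill each from the matching line of
                         ; SRC 1724, optionally flushing the source block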
Now see also Figure 18, it is for describing the process flow diagram 1800 of carrying out the cached data section configuration and the operational method of renaming according to the present invention.
Flow process starts from square frame 1802, and herein, according to the present invention, a series of macro instruction is sent to an instruction queue.Flow process then proceeds to square frame 1804.
In square frame 1804, a macro instruction is subsequently extracted from this instruction queue, and is sent to an extension transfer interpreter.Flow process then proceeds to decisional block 1806.
In decision block 1806, an evaluation is made to determine whether the next macro instruction is an allocate and rename instruction. If so, flow proceeds to block 1810; if not, flow proceeds to block 1808.
In block 1810, the detected allocate and rename instruction is translated into an allocate and rename micro instruction sequence that directs the microprocessor to obtain exclusive ownership of a first block comprising a specified number of cache lines, and to modify the contents at a destination address within the first block to be identical to the contents at a source address within a second block of cache lines. Flow then proceeds to block 1812.
In block 1808, the macro instruction is translated into a corresponding micro instruction sequence that directs the microprocessor to perform a specified operation. Flow then proceeds to block 1812.
In block 1812, a next micro instruction sequence is provided to execution logic within the microprocessor. Flow then proceeds to decision block 1814.
In decision block 1814, an evaluation is made to determine whether the next micro instruction sequence is an allocate and rename micro instruction sequence. If so, flow proceeds to block 1818; if not, flow proceeds to block 1816.
In block 1816, the next micro instruction sequence is executed. Flow then proceeds to block 1830.
In block 1818, in response to the allocate and rename micro instruction sequence, a temporary counter is initialized to zero to track how many cache lines of the block have been exclusively allocated and renamed. Flow then proceeds to block 1820.
In block 1820, a first allocate and rename operation designates a first source cache line address, SRCADDR, and a first destination cache line address, DSTADDR. DSTADDR is derived from the contents of a pre-loaded architectural register, plus the cache line width multiplied by the count of block 1818. Because the count is initially zero, the first destination address equals the address held in the architectural register. Flow then proceeds to block 1822.
In block 1822, a data read and invalidate operation is issued over the memory bus to allocate the first cache line at address DSTADDR in the exclusive MESI state. Flow then proceeds to block 1824.
In block 1824, the first cache line at address DSTADDR, now held in the exclusive MESI state, is modified to contain the contents of the first source cache line at address SRCADDR. In one embodiment of the present invention, the contents at SRCADDR are copied to DSTADDR. In another embodiment, DSTADDR is renamed as SRCADDR. In yet another alternative embodiment, the contents at SRCADDR are additionally evicted, releasing that line back to memory. Flow then proceeds to block 1826.
In block 1826, the count is incremented. Flow then proceeds to decision block 1828.
In decision block 1828, an evaluation is made to determine whether the incremented count equals the number of cache lines to be allocated and renamed, that number being stored in the shadow register. If not, flow proceeds back to block 1820, where another iteration allocates and renames the next cache line. If the count equals the contents of the shadow register, then allocation and renaming of every cache line in the block is complete, and flow proceeds to block 1830.
In block 1830, the method completes.
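Blocks 1818 through 1830 together form a simple allocate-and-rename loop. The following C sketch summarizes that loop for the copy embodiment and is illustrative only: the helper bus_read_and_invalidate(), the CACHE_LINE_WIDTH constant, and the assumption that the source address advances in step with the destination are ours, not elements disclosed by the patent.

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define CACHE_LINE_WIDTH 32u  /* assumed: 32-byte lines, per the desktop example */

/* Stub standing in for the bus unit's data read and invalidate operation
 * (block 1822): in hardware this gains exclusive MESI ownership of the line;
 * here it merely logs the request. */
static void bus_read_and_invalidate(void *line)
{
    printf("read-and-invalidate (exclusive) for line %p\n", line);
}

/* Blocks 1818-1830: allocate and rename 'count_reg' cache lines.
 * 'dst_base' models the pre-loaded architectural register of block 1820;
 * 'count_reg' models the shadow register tested at decision block 1828.
 * The source address is assumed to advance in step with the destination. */
static void allocate_and_rename_block(uint8_t *dst_base, const uint8_t *src_base,
                                      uint32_t count_reg)
{
    uint32_t count = 0;                                  /* block 1818 */
    while (count != count_reg) {                         /* decision block 1828 */
        uint8_t *dst = dst_base + (size_t)count * CACHE_LINE_WIDTH;  /* block 1820 */
        const uint8_t *src = src_base + (size_t)count * CACHE_LINE_WIDTH;
        bus_read_and_invalidate(dst);                    /* block 1822 */
        memcpy(dst, src, CACHE_LINE_WIDTH);              /* block 1824: copy embodiment */
        count++;                                         /* block 1826 */
    }
}                                                        /* block 1830: done */
```

In the rename embodiment of block 1824 the cache would instead retag the destination line, and the evict variant would additionally release the source line back to memory; either would take the place of the memcpy() above.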
Although the present invention and its objects, features, and advantages have been described in detail, other embodiments remain within the scope of the invention. For example, while the description herein is given in terms of the MESI protocol, the advantages the invention provides for pending store operations or for evictions do not depend on it; MESI was chosen as the example because it is widely used in the art. What matters is that the invention prefetches data into a local cache in a form or state that allows the data to be modified immediately, without issuing a transaction over the memory bus. Whether that form or state conforms to MESI is unimportant.
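As a concrete illustration of this point, the following C fragment (illustrative only; the enum and predicate are our assumptions, not disclosed hardware) captures the property the invention relies on: a line held in a state conferring exclusive ownership can be written with no bus transaction, whatever the protocol is called.

```c
/* Toy MESI model: a line already owned exclusively (Modified or Exclusive)
 * can be written with no memory-bus transaction; Shared or Invalid lines
 * would first require ownership to be obtained over the bus. */
enum mesi_state { MODIFIED, EXCLUSIVE, SHARED, INVALID };

static int writable_without_bus_transaction(enum mesi_state s)
{
    return s == MODIFIED || s == EXCLUSIVE;
}
```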
As previously mentioned, different architectures use different cache line widths. Desktop computer systems today commonly use 32-byte cache lines, but the description herein does not limit the invention to 32-, 64-, 128-, or even 256-byte lines. On the contrary, the invention is expected to apply to any system architecture that restricts local modification of its cache lines and does not otherwise provide for directly prefetching those lines such that they can be modified immediately, without relying on bus transactions to obtain permission to modify.
In addition, the invention has been illustrated using embodiments consistent with the x86 architecture. An x86-compatible microprocessor can certainly benefit from the invention, but it should be noted that the scope of the invention is not limited to x86-compatible environments, because many other architectures also employ prefetch instructions that cannot guarantee that the prefetched result is exclusively held data.
Finally, although an address specifier is used herein to designate the address of the cache lines to be allocated and renamed, such specification need not be explicit. An embodiment of the allocate and rename instruction according to the present invention may implicitly specify an architectural register containing the address, the address having been loaded into that register by a previously executed instruction.
In sum, the foregoing are merely preferred embodiments of the present invention and are not intended to limit the scope within which the invention may be practiced. All equivalent changes and modifications made in accordance with the claims of the present invention shall fall within the scope covered by this patent.
Claims (22)
1. A microprocessor apparatus for performing a block memory copy operation, the apparatus comprising:
translation logic, configured to translate an allocate and rename instruction into a micro instruction sequence that directs a microprocessor to allocate a first cache line block in an exclusive state and to copy the contents of a second cache line block to the first cache line block; and
execution logic, coupled to the translation logic, configured to receive the micro instruction sequence, to issue operations over a memory bus requesting the first cache line block in the exclusive state, and to copy the contents of the second cache line block to the first cache line block.
2. The microprocessor apparatus as claimed in claim 1, wherein the allocate and rename instruction comprises a modification of an existing prefetch instruction within an existing instruction set, and wherein the existing prefetch instruction does not otherwise provide for allocating and renaming the first cache line block.
3. The microprocessor apparatus as claimed in claim 2, wherein the existing instruction set comprises the x86 instruction set, and wherein the existing prefetch instruction comprises the x86 prefetch instruction.
4. The microprocessor apparatus as claimed in claim 2, wherein the allocate and rename instruction comprises an extended prefetch instruction, wherein one value of a prefetch opcode field directs the microprocessor to allocate a first destination cache line in the exclusive state and to copy the contents of a first source cache line to the first destination cache line, and wherein other values of the prefetch opcode field direct the microprocessor to perform other types of prefetch operations according to the existing instruction set.
5. The microprocessor apparatus as claimed in claim 4, wherein the prefetch opcode field comprises bits 5:3 of the ModR/M byte of an x86 prefetch instruction.
6. The microprocessor apparatus as claimed in claim 5, wherein a repeat prefix field directs the microprocessor to allocate and rename a specified number of cache lines, and wherein the specified number of cache lines includes the first destination cache line.
7. The microprocessor apparatus as claimed in claim 6, wherein the specified number is specified by the contents of an architectural register within the microprocessor.
8. The microprocessor apparatus as claimed in claim 1, wherein, in response to the micro instruction sequence, the execution logic directs a bus unit to issue the operations over the memory bus.
9. The microprocessor apparatus as claimed in claim 7, wherein the operations comprise a plurality of zero-length data read and invalidate operations, the operations requesting exclusive ownership of the first cache line block.
10. The microprocessor apparatus as claimed in claim 1, further comprising an architectural register whose contents comprise the number of cache lines in the first cache line block, wherein the number of cache lines is transparently copied to a shadow register, and wherein the execution logic employs the shadow register in performing the block memory copy operation.
11. The microprocessor apparatus as claimed in claim 1, wherein the execution logic evicts the contents of the second cache line block, releasing the second cache line block back to memory.
12. An apparatus in a microprocessor for performing a block rename operation, the apparatus comprising:
an allocate and rename instruction, configured to direct the microprocessor to allocate a first cache line block in an exclusive state and to copy the contents of a second cache line block to the first cache line block; and
a translator, configured to receive the allocate and rename instruction and to translate it into associated micro instructions, wherein the associated micro instructions direct execution logic within the microprocessor to issue a plurality of bus operations over a memory bus requesting exclusive ownership of the first cache line block, and to copy the contents of the second cache line block to the first cache line block.
13. The apparatus in a microprocessor as claimed in claim 12, wherein the allocate and rename instruction comprises a modification of an existing prefetch instruction within an existing instruction set, and wherein the existing prefetch instruction does not otherwise provide for the block rename operation.
14. The apparatus in a microprocessor as claimed in claim 12, wherein, in response to the associated micro instructions, the execution logic directs a bus unit to issue the bus operations over the memory bus.
15. The apparatus in a microprocessor as claimed in claim 14, wherein the bus operations comprise a plurality of zero-length data read and invalidate operations.
16. The apparatus in a microprocessor as claimed in claim 12, further comprising an architectural register whose contents comprise the number of cache lines in the first cache line block, wherein the number of cache lines is transparently copied to a shadow register, and wherein the execution logic employs the shadow register in performing the block memory copy operation.
17. A method for performing a memory copy, comprising:
fetching an allocate and rename macro instruction;
translating the allocate and rename macro instruction into a micro instruction sequence, wherein the micro instruction sequence directs a microprocessor to allocate a first cache line block in an exclusive state and to copy the contents of a second cache line block to the first cache line block; and
in response to the micro instruction sequence, issuing bus operations over a memory bus to allocate the first cache line block in the exclusive state, and copying the contents of the second cache line block to the first cache line block.
18. The method as claimed in claim 17, wherein the issuing comprises:
enabling the microprocessor to perform the issuing in parallel with the execution of subsequent instructions.
19. The method as claimed in claim 17, wherein the fetching comprises:
providing the allocate and rename instruction as a modification of an existing prefetch instruction within an existing instruction set, wherein the existing prefetch instruction does not otherwise provide for allocating the first cache line block in the exclusive state and copying the contents of the second cache line block to the first cache line block.
20. The method as claimed in claim 17, wherein the issuing comprises:
providing, over the memory bus, a plurality of zero-length data read and invalidate operations, the operations requesting exclusive ownership of the first cache line block.
21. The method as claimed in claim 17, further comprising:
transparently copying the contents of an architectural register to a shadow register, wherein the architectural register contains the number of cache lines for the memory copy operation.
22. The method as claimed in claim 17, wherein the copying comprises:
evicting the contents of the second cache line block to release the second cache line block back to memory.
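Claims 4 through 6 above recite an extended prefetch instruction whose opcode field occupies bits 5:3 of the x86 ModR/M byte (the reg/opcode field). The following C fragment is a minimal decode sketch under that reading; the specific value chosen for the allocate-and-rename encoding is an assumption for illustration, since the claims do not fix a particular bit pattern.

```c
#include <stdint.h>

/* Bits 5:3 of the ModR/M byte form the reg/opcode field that x86 prefetch
 * variants use as a sub-opcode. */
static inline unsigned prefetch_opcode_field(uint8_t modrm)
{
    return (modrm >> 3) & 0x7u;
}

/* Hypothetical sub-opcode for the allocate-and-rename hint; the actual
 * encoding is not fixed by the claims and is assumed here for illustration. */
#define PREFETCH_ALLOCATE_RENAME 0x7u

static int is_allocate_and_rename(uint8_t modrm)
{
    return prefetch_opcode_field(modrm) == PREFETCH_ALLOCATE_RENAME;
}
```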
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/405,980 | 2003-04-02 | ||
US10/405,980 US7111125B2 (en) | 2002-04-02 | 2003-04-02 | Apparatus and method for renaming a data block within a cache |
Publications (2)
Publication Number | Publication Date |
---|---|
CN1514374A CN1514374A (en) | 2004-07-21 |
CN100461135C true CN100461135C (en) | 2009-02-11 |
Family
ID=32850634
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNB200410001593XA Expired - Lifetime CN100461135C (en) | 2003-04-02 | 2004-01-14 | Method and device for changing high speed slow storage data sector |
Country Status (4)
Country | Link |
---|---|
US (1) | US7111125B2 (en) |
EP (1) | EP1465077B1 (en) |
CN (1) | CN100461135C (en) |
TW (1) | TWI257067B (en) |
Families Citing this family (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6658552B1 (en) * | 1998-10-23 | 2003-12-02 | Micron Technology, Inc. | Processing system with separate general purpose execution unit and data string manipulation unit |
US20070038984A1 (en) * | 2005-08-12 | 2007-02-15 | Gschwind Michael K | Methods for generating code for an architecture encoding an extended register specification |
US9367465B2 (en) * | 2007-04-12 | 2016-06-14 | Hewlett Packard Enterprise Development Lp | Method and system for improving memory access performance |
US8122195B2 (en) | 2007-12-12 | 2012-02-21 | International Business Machines Corporation | Instruction for pre-fetching data and releasing cache lines |
US9164690B2 (en) * | 2012-07-27 | 2015-10-20 | Nvidia Corporation | System, method, and computer program product for copying data between memory locations |
US9547553B1 (en) | 2014-03-10 | 2017-01-17 | Parallel Machines Ltd. | Data resiliency in a shared memory pool |
US9781027B1 (en) | 2014-04-06 | 2017-10-03 | Parallel Machines Ltd. | Systems and methods to communicate with external destinations via a memory network |
US9690713B1 (en) | 2014-04-22 | 2017-06-27 | Parallel Machines Ltd. | Systems and methods for effectively interacting with a flash memory |
US9594688B1 (en) | 2014-12-09 | 2017-03-14 | Parallel Machines Ltd. | Systems and methods for executing actions using cached data |
US9529622B1 (en) | 2014-12-09 | 2016-12-27 | Parallel Machines Ltd. | Systems and methods for automatic generation of task-splitting code |
US9781225B1 (en) | 2014-12-09 | 2017-10-03 | Parallel Machines Ltd. | Systems and methods for cache streams |
US9753873B1 (en) | 2014-12-09 | 2017-09-05 | Parallel Machines Ltd. | Systems and methods for key-value transactions |
US9632936B1 (en) | 2014-12-09 | 2017-04-25 | Parallel Machines Ltd. | Two-tier distributed memory |
US9639473B1 (en) | 2014-12-09 | 2017-05-02 | Parallel Machines Ltd. | Utilizing a cache mechanism by copying a data set from a cache-disabled memory location to a cache-enabled memory location |
TWI590053B (en) * | 2015-07-02 | 2017-07-01 | 威盛電子股份有限公司 | Selective prefetching of physically sequential cache line to cache line that includes loaded page table |
US10067713B2 (en) | 2015-11-05 | 2018-09-04 | International Business Machines Corporation | Efficient enforcement of barriers with respect to memory move sequences |
US10152322B2 (en) * | 2015-11-05 | 2018-12-11 | International Business Machines Corporation | Memory move instruction sequence including a stream of copy-type and paste-type instructions |
US10042580B2 (en) | 2015-11-05 | 2018-08-07 | International Business Machines Corporation | Speculatively performing memory move requests with respect to a barrier |
US9996298B2 (en) | 2015-11-05 | 2018-06-12 | International Business Machines Corporation | Memory move instruction sequence enabling software control |
US10126952B2 (en) | 2015-11-05 | 2018-11-13 | International Business Machines Corporation | Memory move instruction sequence targeting a memory-mapped device |
US10346164B2 (en) | 2015-11-05 | 2019-07-09 | International Business Machines Corporation | Memory move instruction sequence targeting an accelerator switchboard |
US10241945B2 (en) | 2015-11-05 | 2019-03-26 | International Business Machines Corporation | Memory move supporting speculative acquisition of source and destination data granules including copy-type and paste-type instructions |
US10331373B2 (en) * | 2015-11-05 | 2019-06-25 | International Business Machines Corporation | Migration of memory move instruction sequences between hardware threads |
US10140052B2 (en) | 2015-11-05 | 2018-11-27 | International Business Machines Corporation | Memory access in a data processing system utilizing copy and paste instructions |
CN111782273B (en) * | 2020-07-16 | 2022-07-26 | 中国人民解放军国防科技大学 | Software and hardware cooperative cache device for improving repeated program execution performance |
CN112286577B (en) * | 2020-10-30 | 2022-12-06 | 上海兆芯集成电路有限公司 | Processor and operating method thereof |
US12050915B2 (en) * | 2020-12-22 | 2024-07-30 | Intel Corporation | Instruction and logic for code prefetching |
US11853237B2 (en) * | 2021-11-19 | 2023-12-26 | Micron Technology, Inc. | Input/output sequencer instruction set processing |
CN117640262B (en) * | 2024-01-26 | 2024-04-09 | 杭州美创科技股份有限公司 | Data asset isolation method, device, computer equipment and storage medium |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1014189B (en) * | 1987-11-18 | 1991-10-02 | 国际商用机器公司 | Control machinery of bus flowchart |
CN1074771A (en) * | 1992-01-23 | 1993-07-28 | 英特尔公司 | The microprocessor that has the device of parallel execution of instructions |
CN1099492A (en) * | 1993-07-06 | 1995-03-01 | 协力计算机股份有限公司 | Processor interface chip for dual-microprocessor processor system |
US5555400A (en) * | 1992-09-24 | 1996-09-10 | International Business Machines Corporation | Method and apparatus for internal cache copy |
US5694564A (en) * | 1993-01-04 | 1997-12-02 | Motorola, Inc. | Data processing system a method for performing register renaming having back-up capability |
CN1247608A (en) * | 1997-02-27 | 2000-03-15 | 国际商业机器公司 | Transformational raid for hierarchical storage management system |
CN1349160A (en) * | 2001-11-28 | 2002-05-15 | 中国人民解放军国防科学技术大学 | Correlation delay eliminating method for streamline control |
US6438659B1 (en) * | 1997-12-31 | 2002-08-20 | Unisys Corporation | Directory based cache coherency system supporting multiple instruction processor and input/output caches |
Family Cites Families (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4959777A (en) * | 1987-07-27 | 1990-09-25 | Motorola Computer X | Write-shared cache circuit for multiprocessor system |
EP0375883A3 (en) * | 1988-12-30 | 1991-05-29 | International Business Machines Corporation | Cache storage system |
JP2500101B2 (en) * | 1992-12-18 | 1996-05-29 | インターナショナル・ビジネス・マシーンズ・コーポレイション | How to update the value of a shared variable |
US5903911A (en) * | 1993-06-22 | 1999-05-11 | Dell Usa, L.P. | Cache-based computer system employing memory control circuit and method for write allocation and data prefetch |
JPH09205342A (en) | 1996-01-26 | 1997-08-05 | Matsushita Electric Ind Co Ltd | Surface acoustic wave filter |
US5892970A (en) * | 1996-07-01 | 1999-04-06 | Sun Microsystems, Inc. | Multiprocessing system configured to perform efficient block copy operations |
US5966734A (en) * | 1996-10-18 | 1999-10-12 | Samsung Electronics Co., Ltd. | Resizable and relocatable memory scratch pad as a cache slice |
US6018763A (en) * | 1997-05-28 | 2000-01-25 | 3Com Corporation | High performance shared memory for a bridge router supporting cache coherency |
US5944815A (en) * | 1998-01-12 | 1999-08-31 | Advanced Micro Devices, Inc. | Microprocessor configured to execute a prefetch instruction including an access count field defining an expected number of access |
US6014735A (en) | 1998-03-31 | 2000-01-11 | Intel Corporation | Instruction set extension using prefixes |
US6088789A (en) * | 1998-05-13 | 2000-07-11 | Advanced Micro Devices, Inc. | Prefetch instruction specifying destination functional unit and read/write access mode |
US6253306B1 (en) * | 1998-07-29 | 2001-06-26 | Advanced Micro Devices, Inc. | Prefetch instruction mechanism for processor |
US6289420B1 (en) * | 1999-05-06 | 2001-09-11 | Sun Microsystems, Inc. | System and method for increasing the snoop bandwidth to cache tags in a multiport cache memory subsystem |
US6266744B1 (en) * | 1999-05-18 | 2001-07-24 | Advanced Micro Devices, Inc. | Store to load forwarding using a dependency link file |
US6470444B1 (en) * | 1999-06-16 | 2002-10-22 | Intel Corporation | Method and apparatus for dividing a store operation into pre-fetch and store micro-operations |
US6557084B2 (en) * | 1999-07-13 | 2003-04-29 | International Business Machines Corporation | Apparatus and method to improve performance of reads from and writes to shared memory locations |
US6460132B1 (en) * | 1999-08-31 | 2002-10-01 | Advanced Micro Devices, Inc. | Massively parallel instruction predecoding |
JP2001222466A (en) * | 2000-02-10 | 2001-08-17 | Nec Corp | Multiprocessor system, shared memory control system, its method, and recording medium |
US6751710B2 (en) * | 2000-06-10 | 2004-06-15 | Hewlett-Packard Development Company, L.P. | Scalable multiprocessor system and cache coherence method |
US6845008B2 (en) * | 2001-03-30 | 2005-01-18 | Intel Corporation | Docking station to cool a notebook computer |
US6842830B2 (en) * | 2001-03-31 | 2005-01-11 | Intel Corporation | Mechanism for handling explicit writeback in a cache coherent multi-node architecture |
US6915415B2 (en) * | 2002-01-07 | 2005-07-05 | International Business Machines Corporation | Method and apparatus for mapping software prefetch instructions to hardware prefetch logic |
US7080211B2 (en) * | 2002-02-12 | 2006-07-18 | Ip-First, Llc | Microprocessor apparatus and method for prefetch, allocation, and initialization of a cache line from memory |
US7380103B2 (en) * | 2002-04-02 | 2008-05-27 | Ip-First, Llc | Apparatus and method for selective control of results write back |
US6832296B2 (en) | 2002-04-09 | 2004-12-14 | Ip-First, Llc | Microprocessor with repeat prefetch instruction |
2003
- 2003-04-02 US US10/405,980 patent/US7111125B2/en not_active Expired - Lifetime
- 2003-10-20 TW TW092128965A patent/TWI257067B/en not_active IP Right Cessation
- 2003-12-15 EP EP03257873.4A patent/EP1465077B1/en not_active Expired - Lifetime
2004
- 2004-01-14 CN CNB200410001593XA patent/CN100461135C/en not_active Expired - Lifetime
Also Published As
Publication number | Publication date |
---|---|
EP1465077A3 (en) | 2007-12-05 |
EP1465077B1 (en) | 2018-05-09 |
TWI257067B (en) | 2006-06-21 |
US7111125B2 (en) | 2006-09-19 |
TW200421175A (en) | 2004-10-16 |
US20030229763A1 (en) | 2003-12-11 |
EP1465077A2 (en) | 2004-10-06 |
CN1514374A (en) | 2004-07-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN100461135C (en) | Method and device for changing high speed slow storage data sector | |
CN100504815C (en) | Device and method for changing name against high speed slow storage boundary | |
EP1447741B1 (en) | Cache data block allocation and initialization mechanism | |
CN101950259B (en) | Device,system and method for executing affairs | |
US7194597B2 (en) | Method and apparatus for sharing TLB entries | |
EP0674270B1 (en) | Input/output address translation mechanisms | |
JP3963372B2 (en) | Multiprocessor system | |
US8521964B2 (en) | Reducing interprocessor communications pursuant to updating of a storage key | |
EP1447746B1 (en) | Write back and invalidate mechanism for multiple cache lines | |
JP3531167B2 (en) | System and method for assigning tags to instructions to control instruction execution | |
CN101410797A (en) | Transactional memory in out-of-order processors | |
CN101097544A (en) | Global overflow method for virtualized transactional memory | |
EP3619615B1 (en) | An apparatus and method for managing capability metadata | |
TW200424867A (en) | Dynamic data routing mechanism for a high speed memory cloner | |
EP1447743B1 (en) | Apparatus and method for allocation and initialization of a cache line | |
EP1447744B1 (en) | Exclusive prefetch of a block of data from memory | |
EP1447745B1 (en) | Prefetch with intent to store mechanism | |
EP1465059B1 (en) | Apparatus and method for renaming a cache line | |
Kumar et al. | HP Scalable Computing Architecture |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CX01 | Expiry of patent term | ||
CX01 | Expiry of patent term |
Granted publication date: 20090211 |