WO2012047833A1 - Method and apparatus for floating point register caching - Google Patents
- Publication number
- WO2012047833A1 (PCT/US2011/054688)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- registers
- architected
- physical
- mapped
- instructions
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/30—Arrangements for executing machine instructions, e.g. instruction decode
- G06F9/30098—Register arrangements
- G06F9/3012—Organisation of register space, e.g. banked or distributed register file
- G06F9/3013—Organisation of register space, e.g. banked or distributed register file according to data content, e.g. floating-point registers, address registers
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/30—Arrangements for executing machine instructions, e.g. instruction decode
- G06F9/38—Concurrent instruction execution, e.g. pipeline or look ahead
- G06F9/3836—Instruction issuing, e.g. dynamic instruction scheduling or out of order instruction execution
- G06F9/3838—Dependency mechanisms, e.g. register scoreboarding
- G06F9/384—Register renaming
Definitions
- This invention relates generally to processor-based systems, and, more particularly, to register caching in processor-based systems.
- processor-based systems typically include one or more processing elements such as a central processing unit (CPU), a graphical processing unit (GPU), an accelerated processing unit (APU), and the like.
- the processing units include one or more processor cores that are configured to access instructions and/or data that are stored in a main memory and then execute the instructions and/or manipulate the data.
- Each processor core includes a floating point unit that is used to perform mathematical operations on floating point numbers when required by the executed instructions.
- conventional floating-point units are typically designed to carry out operations such as addition, subtraction, multiplication, division, and square root.
- Some systems can also perform various transcendental functions such as exponential or trigonometric calculations.
- Floating-point operations may be handled separately from integer operations on integer numbers.
- the floating-point unit may also have a set of dedicated floating-point registers for storing floating-point numbers.
- Floating-point units can support multiple floating-point instruction sets.
- the x86 architecture instruction set includes a floating-point related subset of instructions that is referred to as x87.
- the x87 instruction set includes instructions for basic floating point operations such as addition, subtraction and comparison, as well as for more complex numerical operations such as the tangent and arc-tangent functions.
- Floating-point instructions in the x87 instruction set can use a set of architected registers (conventionally known as MMX registers) that can be mapped to physical registers in the floating-point unit.
- computers that include multiple processing cores may support a single instruction, multiple data (SIMD) instruction set.
- the x86 architecture SIMD instruction set supports another floating-point related subset of instructions that are referred to as Streaming SIMD Extensions (SSE).
- Floating-point instructions in the SSE instruction set can use another set of architected registers (conventionally known as XMM registers) that can also be mapped to physical registers in the floating-point unit.
- architected registers for both instruction sets are mapped to physical registers in the floating-point unit so that both sets of architected registers are available to the applications running on the processing unit. Mapping architected registers for both instruction sets to physical registers in the floating-point unit consumes area on the chip, timing resources, and power. Depending on the instruction sets used by different applications, the resources that are allocated to the different types of instruction sets may not be used, thereby reducing the efficiency of the processing unit.
- the disclosed subject matter is directed to addressing the effects of one or more of the problems set forth above.
- the following presents a simplified summary of the disclosed subject matter in order to provide a basic understanding of some aspects of the disclosed subject matter. This summary is not an exhaustive overview of the disclosed subject matter. It is not intended to identify key or critical elements of the disclosed subject matter or to delineate the scope of the disclosed subject matter. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is discussed later.
- a method for floating-point register caching.
- One embodiment of the method includes mapping a first set of architected registers defined by a first instruction set to a memory outside of a plurality of physical registers.
- the plurality of physical registers are configured to map to the first set, a second set of architected registers defined by a second instruction set, and a set of rename registers.
- This embodiment of the method also includes adding the physical registers corresponding to the first set of architected registers to the set of rename registers.
- an apparatus for floating-point register caching.
- One embodiment of the apparatus includes a plurality of physical registers configured to be mapped to a first set of architected registers defined by a first instruction set, a second set of architected registers defined by a second instruction set, and a set of rename registers.
- the first set of architected registers can be mapped to a memory outside the physical registers so that the corresponding physical registers can be added to the set of rename registers.
- Another embodiment includes a computer readable media including instructions that when executed can configure a manufacturing process used to manufacture a semiconductor device that can be used for floating-point register caching.
- the manufactured semiconductor device includes a plurality of physical registers configured to be mapped to a first set of architected registers defined by a first instruction set, a second set of architected registers defined by a second instruction set, and a set of rename registers.
- the first set of architected registers can be mapped to a memory outside the physical registers so that the corresponding physical registers can be added to the set of rename registers.
- an apparatus for floating-point register caching.
- One embodiment of the apparatus includes a floating point unit configured to perform mathematical operations on floating point numbers and a plurality of physical registers implemented in the floating point unit and configured to store floating-point numbers.
- This embodiment of the apparatus also includes a memory outside of the physical registers. The memory is configured to store floating-point numbers.
- a first set of architected registers defined by a first instruction set, a second set of architected registers defined by a second instruction set, and a set of rename registers can be mapped to the plurality of physical registers.
- the first set of architected registers can also be mapped to the memory outside the physical registers so that the corresponding physical registers can be added to the set of rename registers.
- Figure 1 conceptually illustrates a first exemplary embodiment of a semiconductor device
- Figure 2A conceptually illustrates a first exemplary embodiment of a mapping of architected registers to physical registers
- Figure 2B conceptually illustrates a second exemplary embodiment of a mapping of architected registers to physical registers and a cache
- Figure 3 conceptually illustrates a first exemplary embodiment of a method for mapping architected registers to physical registers
- Figure 4 conceptually illustrates a second exemplary embodiment of a method for mapping architected registers to physical registers
- Figure 5 conceptually illustrates a third exemplary embodiment of a method for mapping architected registers to physical registers.
- the present application describes embodiments of a processor-based system that can execute instructions from multiple floating-point instruction sets.
- the system may be able to execute instructions from the x87 floating-point instruction set and instructions from the SSE floating-point instruction set.
- Each instruction set is allocated a number of architected registers that can be used by the instructions when performing various floating-point operations.
- the architected registers may be mapped to a set of physical registers implemented in or used by the processor-based system.
- the physical registers may also include rename registers that can be used to rename one or more of the architected registers so that multiple instructions can be executed in parallel (e.g., concurrently) even though the instructions may refer to the same architected register.
- the architected registers associated with either of the instruction sets can be offloaded to an outside memory such as a cache or other register structure.
- the physical registers that are associated with the offloaded architected registers can then be added to the set of rename registers.
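the offloading scheme described above can be sketched as a small software model. This is an illustrative sketch only, not the claimed hardware implementation: the class, method names, example register counts, and the dict standing in for the outside memory are all invented for demonstration.

```python
# Illustrative model: a physical register file backing two sets of
# architected registers plus a rename pool. Offloading one architected
# set to an outside memory returns its physical registers to the pool.

class RegisterFile:
    def __init__(self, num_physical=88, x87_count=8, temp_count=8, sse_count=32):
        # Map each architected register name to a physical register index.
        self.arch_map = {}
        idx = 0
        for prefix, count in (("mmx", x87_count), ("tmp", temp_count), ("xmm", sse_count)):
            for i in range(count):
                self.arch_map[f"{prefix}{i}"] = idx
                idx += 1
        # Physical registers not backing an architected register form the rename pool.
        self.rename_pool = set(range(idx, num_physical))

    def offload_set(self, prefix, outside_memory):
        # Write the set's contents to the outside memory (a cache, emulated
        # memory, etc.) and return its physical registers to the rename pool.
        for name in [n for n in self.arch_map if n.startswith(prefix)]:
            phys = self.arch_map.pop(name)
            outside_memory[name] = phys  # stands in for the register contents
            self.rename_pool.add(phys)

cache = {}
rf = RegisterFile()
renames_before = len(rf.rename_pool)  # rename registers before offloading
rf.offload_set("mmx", cache)          # offload the x87 architected set
renames_after = len(rf.rename_pool)   # pool grows by the offloaded set's size
```

with the example sizes chosen here, offloading the eight "mmx" entries grows the rename pool from 40 to 48 entries, which is the mechanism by which the out-of-order window can be enlarged.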
- Figure 1 conceptually illustrates a first exemplary embodiment of a semiconductor device 100 that may be formed in or on a semiconductor wafer (or die).
- the semiconductor device 100 may be formed in or on the semiconductor wafer using well-known processes such as deposition, growth, photolithography, etching, planarizing, polishing, annealing, and the like.
- the device 100 includes a central processing unit (CPU) 105 that is configured to access instructions and/or data that are stored in the main memory 110.
- the CPU 105 includes at least one CPU core 115 that is used to execute the instructions and/or manipulate the data.
- Many processor-based systems include multiple CPU cores 115.
- the CPU 105 also implements a hierarchical (or multilevel) cache system that is used to speed access to the instructions and/or data by storing selected instructions and/or data in the caches.
- persons of ordinary skill in the art having benefit of the present disclosure should appreciate that alternative embodiments of the device 100 may implement different configurations of the CPU 105, such as configurations that use external caches.
- the techniques described in the present application may be applied to other processors such as graphical processing units (GPUs), accelerated processing units (APUs), and the like.
- the illustrated cache system includes a level 2 (L2) cache 120 for storing copies of instructions and/or data that are stored in the main memory 110.
- the L2 cache 120 is 16-way associative to the main memory 110 so that each line in the main memory 110 can potentially be copied to and from 16 particular lines (which are conventionally referred to as "ways") in the L2 cache 120.
- the main memory 110 and/or the L2 cache 120 can be implemented using any associativity.
- the L2 cache 120 may be implemented using smaller and faster memory elements.
- the L2 cache 120 may also be deployed logically and/or physically closer to the CPU core 115 (relative to the main memory 110) so that information may be exchanged between the CPU core 115 and the L2 cache 120 more rapidly and/or with less latency.
- the illustrated cache system also includes an L1 cache 125 for storing copies of instructions and/or data that are stored in the main memory 110 and/or the L2 cache 120.
- the L1 cache 125 may be implemented using smaller and faster memory elements so that information stored in the lines of the L1 cache 125 can be retrieved quickly by the CPU 105.
- the L1 cache 125 may also be deployed logically and/or physically closer to the CPU core 115 (relative to the main memory 110 and the L2 cache 120) so that information may be exchanged between the CPU core 115 and the L1 cache 125 more rapidly and/or with less latency (relative to communication with the main memory 110 and the L2 cache 120).
- the L1 cache 125 and the L2 cache 120 represent one exemplary embodiment of a multi-level hierarchical cache memory system.
- Alternative embodiments may use different multilevel caches including elements such as L0 caches, L1 caches, L2 caches, L3 caches, and the like.
- the L1 cache 125 is separated into level 1 (L1) caches for storing instructions and data, which are referred to as the L1-I cache 130 and the L1-D cache 135. Separating or partitioning the L1 cache 125 into an L1-I cache 130 for storing only instructions and an L1-D cache 135 for storing only data may allow these caches to be deployed closer to the entities that are likely to request instructions and/or data, respectively. Consequently, this arrangement may reduce contention, wire delays, and generally decrease latency associated with instructions and data.
- a replacement policy dictates that the lines in the L1-I cache 130 are replaced with instructions from the L2 cache 120 and the lines in the L1-D cache 135 are replaced with data from the L2 cache 120.
- the L1 cache 125 may not be partitioned into separate instruction-only and data-only caches 130, 135.
- the caches 120, 125, 130, 135 can be flushed by writing back modified (or "dirty") cache lines to the main memory 110 and invalidating other lines in the caches 120, 125, 130, 135.
- Cache flushing may be required for some instructions performed by the CPU 105, such as a RESET or a write-back-invalidate (WBINVD) instruction.
- Processor-based systems utilize two basic memory access instructions: a store that puts (or stores) information in a memory location such as a register and a load that reads information out of a memory location.
- the CPU core 115 can execute programs that are formed using instructions such as loads and stores.
- programs are stored in the main memory 110 and the instructions are kept in program order, which indicates the logical order for execution of the instructions so that the program operates correctly.
- the main memory 110 may store instructions for a program 140 that includes the stores S1, S2 and the load L1 in program order.
- the program 140 may also include other instructions that may be performed earlier or later in the program order of the program 140.
- the illustrated embodiment of the CPU core 115 includes a decoder 145 that selects and decodes program instructions so that they can be executed by the CPU core 115.
- the CPU core 115 is an out-of-order processor that can execute instructions in an order that differs from the program order of the instructions in the associated program.
- the decoder 145 may select and/or decode instructions from the program 140 in the order L1, S1, S2, which differs from the program order of the program 140 because the load L1 is picked before the stores S1, S2.
- the decoder 145 can dispatch, send, or provide the decoded instructions to a load/store unit 150.
- the load/store unit 150 implements one or more store and/or load queues 155, 160 that are used to hold the stores and associated data.
- the load/store unit 150 may also implement an emulated memory (E-MEM) 165 that can emulate or imitate operations of other memory elements in the system 100.
- the store queues 155 shown in the illustrated embodiment are used to hold the stores and associated data.
- the data location for each store is indicated by a linear address, which may be translated into a physical address by the CPU core 115 so that data can be accessed from the main memory 110 and/or one of the caches 120, 125, 130, 135.
- the CPU core 115 may therefore be electronically and/or communicatively coupled to a translation lookaside buffer (TLB) 170 that holds information that is used to translate linear addresses into physical addresses.
- the load queues 160 shown in the illustrated embodiment are used to hold the loads and associated data. Load data may also be indicated by linear addresses and so the CPU core 115 may translate the linear addresses for load data into a physical address using information stored in the TLB 170.
- the load checks the TLB 170 and/or the data caches 120, 125, 130, 135 for the data used by the load.
- the load can also use the physical address to check the store queue 155 for address matches.
- linear addresses can be used to check the store queue 155 for address matches. If an address (linear or physical depending on the embodiment) in the store queue 155 matches the address of the data used by the load, then store-to-load forwarding can be used to forward the data from the store queue 155 to the load in the load queue 160.
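store-to-load forwarding as described above can be illustrated with a minimal software sketch. The function names and the queue/cache data model are assumptions for illustration only, not the patent's implementation:

```python
from collections import deque

# Store queue holding in-flight stores as (address, data) pairs, oldest first.
store_queue = deque()

def execute_store(address, data):
    store_queue.append((address, data))

def execute_load(address, data_cache):
    # Scan the store queue from youngest to oldest so the most recent
    # matching store is the one forwarded to the load.
    for addr, data in reversed(store_queue):
        if addr == address:
            return data                 # store-to-load forwarding hit
    return data_cache.get(address)      # no match: read from the data cache
```

here a load whose address matches an in-flight store receives the store's data directly from the store queue rather than waiting for the store to reach the data cache.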
- the CPU core 115 may also implement other functional elements.
- the CPU core 115 includes a floating point unit 175 that is used to perform mathematical operations on floating point numbers when required by the executed instructions.
- the floating-point unit 175 may be designed to carry out operations such as addition, subtraction, multiplication, division, and square root.
- Alternative embodiments of the floating-point unit 175 may also perform various transcendental functions such as exponential or trigonometric calculations.
- the floating-point unit 175 implements a register structure 180 that includes a plurality of physical registers 185 (only one indicated by a distinguishing numeral) that are used to hold floating-point numbers.
- the physical registers 185 can be mapped to architected registers associated with or defined by different floating-point instruction sets.
- one portion of the physical registers 185 can be mapped to architected registers (MMX) defined by the x87 floating-point instruction set and another portion of the physical registers 185 can be mapped to architected registers (XMM) defined by the SSE floating-point instruction set.
- the remaining portion of the physical registers 185 can be used as rename registers (REN), e.g., for supporting parallel and/or concurrent execution of instructions in an out-of-order processing environment.
- a first set of architected registers defined by a first instruction set may be mapped to a memory location that is outside the physical registers 185, such as one or more of the caches 120, 125, 130, 135 or the emulated memory 165.
- the physical registers 185 corresponding to the first set of architected registers can then be added to the set of rename registers. Mapping some of the architected registers to memory locations outside of the physical registers 185 therefore frees up additional registers for use as rename registers.
- the additional rename registers allow the system to use additional renames, which can increase the parallelism of the environment by allowing additional instructions to be executed concurrently. Increasing the number of renames may therefore increase the out-of-order window and improve the performance of the system 100.
- Figure 2A conceptually illustrates a first exemplary embodiment of a mapping of architected registers 200(1-3) to physical registers 205.
- the architected registers 200 include architected registers 200(1) defined for the x87 floating-point instruction set, architected registers 200(2) that are allocated as temporary registers for microcode, and architected registers 200(3) defined for the SSE floating-point instruction set.
- the architected registers 200 may be defined and configured to provide eight architected registers 200(1), eight architected registers 200(2), and 32 architected registers 200(3).
- the physical registers 205 include entries that can be mapped to the architected registers 200 and/or used as rename registers, as indicated by the block arrows.
- the physical registers 205 may include 88 entries so that 48 entries can be mapped to the architected registers 200 and the other 40 entries can be used as rename registers.
- the number of the architected registers 200 and/or the number of physical registers 205 is a matter of design choice and may be different in alternative embodiments.
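the example sizes above can be checked with simple arithmetic (using the counts of the illustrated embodiment; other embodiments may choose different sizes):

```python
# Architected entries in the illustrated embodiment of Figure 2A.
x87_regs, microcode_temps, sse_regs = 8, 8, 32
physical_entries = 88

architected_entries = x87_regs + microcode_temps + sse_regs  # 8 + 8 + 32 = 48
rename_entries = physical_entries - architected_entries      # 88 - 48 = 40

# Offloading the x87 set to a cache (as in Figure 2B) frees its entries
# for use as additional rename registers:
rename_entries_after_offload = rename_entries + x87_regs     # 40 + 8 = 48
```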
- Figure 2B conceptually illustrates a second exemplary embodiment of a mapping of architected registers to physical registers and a cache.
- a portion of the architected registers 200 are mapped to memory and/or register structures that are located outside of the physical registers 205.
- a portion of the architected registers 200 may be mapped to a cache 210.
- the outside memory may include various caches implemented in the processor system, emulated memory, and the like.
- a portion of the architected registers 200 that are mapped to the cache 210 may include some or all of the architected registers 200 associated with a particular instruction set.
- the architected registers 200(1) defined for the x87 floating-point instruction set can be mapped to the cache 210.
- this mapping is intended to be illustrative.
- architected registers 200 associated with different instruction sets can be mapped to the cache 210.
- a portion of one or more of the architected registers 200 that includes less than all of the entries associated with a particular instruction set can be mapped to the cache 210.
- the first and second exemplary embodiments of the register mapping shown in Figures 2A and 2B may represent two states of the architected registers 200, the physical registers 205, and the cache 210.
- the processor system may be initialized into either one of these states, e.g., depending on the expected demand usage associated with the different registers and/or instruction sets.
- the processor system may transition between these two states dynamically. For example, the system may be initialized into the state illustrated in Figure 2B so that additional rename registers are available when demand usage for the x87 instruction set associated with the architected registers 200(1) is expected to be small.
- the mapping of the architected registers 200 can be changed so that the system shifts into the state illustrated in Figure 2A.
- Persons of ordinary skill in the art having benefit of the present disclosure should appreciate that this particular set of register states and state transitions depicted in Figures 2A and 2B is intended to be illustrative. In alternative embodiments, states representing other mappings between the architected registers 200, the physical registers 205, and the cache 210 can be used and other criteria for transitioning between the states can be defined.
- mappings from the architected registers 200 to the physical registers 205 and/or the cache 210 depicted in Figures 2A and 2B are relatively ordered and may be sequential and/or block-based. However, in alternative embodiments, other mappings can be used. For example, any of the entries in the architected registers 200 could be mapped to any entries in the physical registers 205 and/or the cache 210. In some cases, the mapping may be random or based on other criteria.
- Figure 3 conceptually illustrates a first exemplary embodiment of a method 300 for mapping architected registers to physical registers.
- the physical registers are configured so that architected registers associated with two or more different floating-point instruction sets can be mapped to the physical registers and used when the processor system is executing instructions.
- the physical registers also include additional entries that can be used as rename registers.
- the mapping can be initialized so that some of the architected registers are mapped to a memory outside of the physical registers.
- a first set of architected registers (which are drawn from architected registers defined for a first floating-point instruction set) is mapped (at 305) to a cache that is located physically and/or logically outside of the physical registers.
- a second set of architected registers is mapped (at 310) to the physical registers.
- the unmapped physical registers that were intended to be mapped to the first set of architected registers can then be added (at 315) to the set of rename registers that are supported by the physical registers.
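the initialization steps of method 300 can be summarized in a short sketch. The dict- and set-based data structures are illustrative assumptions standing in for hardware mapping tables:

```python
def initialize_mapping(first_set, second_set, num_physical):
    # (at 305) map the first set of architected registers to a memory
    # outside of the physical registers, e.g. a cache.
    outside_cache = {name: None for name in first_set}
    # (at 310) map the second set of architected registers to physical registers.
    physical_map = {name: i for i, name in enumerate(second_set)}
    # (at 315) all remaining physical registers, including those that would
    # otherwise have backed the first set, become rename registers.
    rename_pool = set(range(len(second_set), num_physical))
    return outside_cache, physical_map, rename_pool
```

for instance, initializing with eight offloaded registers, forty physically mapped registers, and an 88-entry file yields a 48-entry rename pool.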
- Figure 4 conceptually illustrates a second exemplary embodiment of a method 400 for mapping architected registers to physical registers.
- a first set of architected registers has been offloaded to a memory outside of the physical registers.
- the mapping of the architected registers may have been initialized according to embodiments of the technique depicted in Figure 3.
- Demand usage for instructions defined by the different floating-point instruction sets can be monitored (at 405). As long as there is no demand for instructions in the first instruction set associated with the first set of architected registers, or the demand usage remains (at 410) below a threshold, the system may maintain the initial and/or current mapping and continue to monitor (at 405) demand usage for the different instruction sets.
- the register mapping can be modified when the demand usage for instructions in the first instruction set rises (at 410) above a threshold.
- the threshold may be set to zero so that using any instructions from the first instruction set can trigger a remapping.
- the threshold may be set to a non-zero number indicating a level of demand usage that triggers the re-mapping.
- the first set of architected registers can be mapped or re-mapped (at 415) to the physical registers when the demand usage is greater than the threshold (at 410).
- demand usage associated with the first instruction set may trigger a fault, such as a microcode fault, that allows the architected registers to be mapped (at 415) from an external cache back to the physical registers.
- the contents of the architected registers that are stored in the external cache can also be written into the appropriate locations in the physical registers.
- the physical registers that have been mapped or re-mapped (at 415) to the first set of architected registers may then be removed (at 420) from the set of rename registers supported by the physical registers.
- a fault or reset may not be triggered.
- the re-mapping (at 415) may be handled internally by the floating point architecture with less disruption. For example, a floating point unit could force a few stalls and directly copy information between the physical registers and the local side register structure.
- Figure 5 conceptually illustrates a third exemplary embodiment of a method 500 for mapping architected registers to physical registers.
- the third exemplary embodiment of the method 500 may be used independently of, in conjunction with, concurrently with, and/or in addition to the second exemplary embodiment of the method 400 depicted in Figure 4.
- architected registers for two different floating point instruction sets have been mapped to the physical registers.
- the architected registers may have been initialized according to embodiments of the technique depicted in Figure 3 and then re-mapped to the physical registers in response to changes in demand usage as depicted in Figure 4.
- mapping of the architected registers to the physical registers does not have to correspond to the state discussed in Figure 4 nor does the system have to have followed the route described in Figures 3-4 to arrive at a state that can utilize embodiments of the method 500.
- the method 500 may be used regardless of the particular mapping of the architected registers to the physical registers.
- Demand usage for instructions defined by the different floating-point instruction sets can be monitored (at 505). As long as the demand usage for instructions in the first instruction set associated with the first set of architected registers remains (at 510) above a threshold, the system may maintain the current mapping and continue to monitor (at 505) demand usage for the different instruction sets.
- the threshold may be set to zero so that a re-mapping is triggered when the demand usage falls to zero.
- the threshold may be set to a non-zero number indicating a level of demand usage that triggers the re-mapping.
- the re-mapping can be triggered when the demand usage has dropped below the threshold for a selected period of time or number of cycles.
- the first set of architected registers can be mapped (at 515) to a memory outside the physical registers, such as a cache. Information stored in the re-mapped physical registers may also be copied or written from the physical registers to the cache entries indicated by the mapping (at 515). The physical registers associated with the cached first set can then be added (at 520) to the set of available rename registers. Re-mapping (at 515) architected registers for the instruction set that is not being used (or is being used at a relatively low demand usage) may therefore increase the available number of rename registers, resulting in a net performance improvement for the system. For example, the out-of-order window for a processor that can perform out-of-order execution can be increased when the number of available rename registers increases.
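methods 400 and 500 together act as a simple demand-driven state machine, sketched below. The thresholds, cycle count, register counts, and names are illustrative assumptions, not claimed values:

```python
# Sketch: re-map the offloaded architected set back into the physical file
# when demand rises (method 400), and offload it again after demand stays
# at or below a threshold for enough cycles (method 500).

class RemapController:
    def __init__(self, set_size=8, high=0, low=0, idle_cycles_needed=1000):
        self.set_size = set_size
        self.high = high                    # threshold for re-mapping in (at 410)
        self.low = low                      # threshold for offloading (at 510)
        self.idle_cycles_needed = idle_cycles_needed
        self.idle_cycles = 0
        self.offloaded = True               # initialized as in Figure 2B
        self.rename_count = 48              # 40 base + 8 freed entries

    def monitor(self, demand):
        if self.offloaded:
            if demand > self.high:          # method 400: map back in (at 415)
                self.offloaded = False
                self.rename_count -= self.set_size  # remove from pool (at 420)
                self.idle_cycles = 0
        else:
            if demand <= self.low:
                self.idle_cycles += 1
                if self.idle_cycles >= self.idle_cycles_needed:
                    self.offloaded = True               # method 500: map out (at 515)
                    self.rename_count += self.set_size  # add to pool (at 520)
            else:
                self.idle_cycles = 0
```

requiring several consecutive idle cycles before offloading provides the hysteresis suggested above, so the system does not thrash between the two mappings on brief demand changes.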
- Embodiments of processor systems that support floating-point register caching as described herein can be fabricated in semiconductor fabrication facilities according to various processor designs.
- a processor design can be represented as code stored on a computer readable media.
- Exemplary codes that may be used to define and/or represent the processor design may include HDL, Verilog, and the like.
- the code may be written by engineers, synthesized by other processing devices, and used to generate an intermediate representation of the processor design, e.g., netlists.
- the intermediate representation can be stored on computer readable media and used to configure and control a manufacturing/fabrication process that is performed in a semiconductor fabrication facility.
- the semiconductor fabrication facility may include processing tools for performing deposition, photolithography, etching, polishing/planarizing, metrology, and other processes that are used to form transistors and other circuitry on semiconductor substrates.
- the processing tools can be configured and operated using the intermediate representation, e.g., through the use of mask works generated from GDSII data.
- terms such as "processing," "computing," or "displaying" refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical, electronic quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
- the software implemented aspects of the disclosed subject matter are typically encoded on some form of program storage medium or implemented over some type of transmission medium.
- the program storage medium may be magnetic (e.g., a floppy disk or a hard drive) or optical (e.g., a compact disk read only memory, or "CD ROM"), and may be read only or random access.
- the transmission medium may be twisted wire pairs, coaxial cable, optical fiber, or some other suitable transmission medium known to the art.
- the disclosed subject matter is not limited by these aspects of any given implementation.
- the particular embodiments disclosed above are illustrative only, as the disclosed subject matter may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein.
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Advance Control (AREA)
- Executing Machine-Instructions (AREA)
Priority Applications (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201180059045.0A CN103262028B (zh) | 2010-10-07 | 2011-10-04 | Method and apparatus for floating point register caching |
| EP11771344.6A EP2625599B1 (en) | 2010-10-07 | 2011-10-04 | Method and apparatus for floating point register caching |
| KR1020137008998A KR101797187B1 (ko) | 2010-10-07 | 2011-10-04 | Method and apparatus for floating point register caching |
| JP2013532870A JP5703382B2 (ja) | 2010-10-07 | 2011-10-04 | Method and apparatus for floating point register caching |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US12/900,124 US9626190B2 (en) | 2010-10-07 | 2010-10-07 | Method and apparatus for floating point register caching |
| US12/900,124 | 2010-10-07 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2012047833A1 (en) | 2012-04-12 |
Family
ID=44863229
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2011/054688 Ceased WO2012047833A1 (en) | 2010-10-07 | 2011-10-04 | Method and apparatus for floating point register caching |
Country Status (6)
| Country | Link |
|---|---|
| US (1) | US9626190B2 (en) |
| EP (1) | EP2625599B1 (en) |
| JP (1) | JP5703382B2 (ja) |
| KR (1) | KR101797187B1 (ko) |
| CN (1) | CN103262028B (zh) |
| WO (1) | WO2012047833A1 (en) |
Families Citing this family (16)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8914615B2 (en) * | 2011-12-02 | 2014-12-16 | Arm Limited | Mapping same logical register specifier for different instruction sets with divergent association to architectural register file using common address format |
| US9632947B2 (en) * | 2013-08-19 | 2017-04-25 | Intel Corporation | Systems and methods for acquiring data for loads at different access times from hierarchical sources using a load queue as a temporary storage buffer and completing the load early |
| CN105993000B (zh) * | 2013-10-27 | 2021-05-07 | Advanced Micro Devices, Inc. | Processor and method for floating point register aliasing |
| JP6493088B2 (ja) * | 2015-08-24 | 2019-04-03 | Fujitsu Limited | Arithmetic processing device and method of controlling an arithmetic processing device |
| US10838733B2 (en) | 2017-04-18 | 2020-11-17 | International Business Machines Corporation | Register context restoration based on rename register recovery |
| US10545766B2 (en) | 2017-04-18 | 2020-01-28 | International Business Machines Corporation | Register restoration using transactional memory register snapshots |
| US10649785B2 (en) | 2017-04-18 | 2020-05-12 | International Business Machines Corporation | Tracking changes to memory via check and recovery |
| US10963261B2 (en) | 2017-04-18 | 2021-03-30 | International Business Machines Corporation | Sharing snapshots across save requests |
| US10782979B2 (en) | 2017-04-18 | 2020-09-22 | International Business Machines Corporation | Restoring saved architected registers and suppressing verification of registers to be restored |
| US11010192B2 (en) | 2017-04-18 | 2021-05-18 | International Business Machines Corporation | Register restoration using recovery buffers |
| US10572265B2 (en) | 2017-04-18 | 2020-02-25 | International Business Machines Corporation | Selecting register restoration or register reloading |
| US10740108B2 (en) | 2017-04-18 | 2020-08-11 | International Business Machines Corporation | Management of store queue based on restoration operation |
| US10761983B2 (en) * | 2017-11-14 | 2020-09-01 | International Business Machines Corporation | Memory based configuration state registers |
| US10592164B2 (en) | 2017-11-14 | 2020-03-17 | International Business Machines Corporation | Portions of configuration state registers in-memory |
| US10853078B2 (en) * | 2018-12-21 | 2020-12-01 | Intel Corporation | Method and apparatus for supporting speculative memory optimizations |
| KR20210134915A (ko) * | 2019-02-20 | 2021-11-11 | Optimum Semiconductor Technologies Inc. | Apparatus and method for hardware-efficient adaptive computation of floating-point trigonometric functions using a coordinate rotation digital computer (CORDIC) |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6237076B1 (en) * | 1998-08-19 | 2001-05-22 | International Business Machines Corporation | Method for register renaming by copying a 32 bits instruction directly or indirectly to a 64 bits instruction |
| US20070162726A1 (en) | 2006-01-10 | 2007-07-12 | Michael Gschwind | Method and apparatus for sharing storage and execution resources between architectural units in a microprocessor using a polymorphic function unit |
Family Cites Families (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2000056969A (ja) * | 1998-08-07 | 2000-02-25 | Matsushita Electric Ind Co Ltd | Register file |
| US6425072B1 (en) | 1999-08-31 | 2002-07-23 | Advanced Micro Devices, Inc. | System for implementing a register free-list by using swap bit to select first or second register tag in retire queue |
| US6772317B2 (en) * | 2001-05-17 | 2004-08-03 | Intel Corporation | Method and apparatus for optimizing load memory accesses |
| US7065631B2 (en) * | 2002-04-09 | 2006-06-20 | Sun Microsystems, Inc. | Software controllable register map |
| US7127592B2 (en) * | 2003-01-08 | 2006-10-24 | Sun Microsystems, Inc. | Method and apparatus for dynamically allocating registers in a windowed architecture |
| US7506139B2 (en) * | 2006-07-12 | 2009-03-17 | International Business Machines Corporation | Method and apparatus for register renaming using multiple physical register files and avoiding associative search |
| US7475224B2 (en) * | 2007-01-03 | 2009-01-06 | International Business Machines Corporation | Register map unit supporting mapping of multiple register specifier classes |
| US8140780B2 (en) * | 2008-12-31 | 2012-03-20 | Micron Technology, Inc. | Systems, methods, and devices for configuring a device |
| US8266411B2 (en) | 2009-02-05 | 2012-09-11 | International Business Machines Corporation | Instruction set architecture with instruction characteristic bit indicating a result is not of architectural importance |
| US8707015B2 (en) | 2010-07-01 | 2014-04-22 | Advanced Micro Devices, Inc. | Reclaiming physical registers renamed as microcode architectural registers to be available for renaming as instruction set architectural registers based on an active status indicator |
| US8972701B2 (en) | 2011-12-06 | 2015-03-03 | Arm Limited | Setting zero bits in architectural register for storing destination operand of smaller size based on corresponding zero flag attached to renamed physical register |
- 2010
- 2010-10-07 US US12/900,124 patent/US9626190B2/en active Active
- 2011
- 2011-10-04 JP JP2013532870A patent/JP5703382B2/ja active Active
- 2011-10-04 KR KR1020137008998A patent/KR101797187B1/ko active Active
- 2011-10-04 WO PCT/US2011/054688 patent/WO2012047833A1/en not_active Ceased
- 2011-10-04 CN CN201180059045.0A patent/CN103262028B/zh active Active
- 2011-10-04 EP EP11771344.6A patent/EP2625599B1/en active Active
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6237076B1 (en) * | 1998-08-19 | 2001-05-22 | International Business Machines Corporation | Method for register renaming by copying a 32 bits instruction directly or indirectly to a 64 bits instruction |
| US20070162726A1 (en) | 2006-01-10 | 2007-07-12 | Michael Gschwind | Method and apparatus for sharing storage and execution resources between architectural units in a microprocessor using a polymorphic function unit |
Non-Patent Citations (1)
| Title |
|---|
| BUTTS, J.A. ET AL.: "Use-Based Register Caching with Decoupled Indexing", PROCEEDINGS OF THE 31ST INTERNATIONAL SYMPOSIUM ON COMPUTER ARCHITECTURE, June 2004 (2004-06-01), München/ Germany, pages 1 - 12, XP002665885 * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN103262028B (zh) | 2016-02-10 |
| JP2014500993A (ja) | 2014-01-16 |
| EP2625599B1 (en) | 2014-09-03 |
| KR101797187B1 (ko) | 2017-11-13 |
| US20120089807A1 (en) | 2012-04-12 |
| US9626190B2 (en) | 2017-04-18 |
| CN103262028A (zh) | 2013-08-21 |
| EP2625599A1 (en) | 2013-08-14 |
| JP5703382B2 (ja) | 2015-04-15 |
| KR20130127437A (ko) | 2013-11-22 |
Similar Documents
| Publication | Title | Publication Date |
|---|---|---|
| EP2625599B1 (en) | Method and apparatus for floating point register caching | |
| US8713263B2 (en) | Out-of-order load/store queue structure | |
| EP2476060B1 (en) | Store aware prefetching for a datastream | |
| US9448936B2 (en) | Concurrent store and load operations | |
| JP6143872B2 (ja) | Apparatus, method, and system | |
| US8412911B2 (en) | System and method to invalidate obsolete address translations | |
| US9213640B2 (en) | Promoting transactions hitting critical beat of cache line load requests | |
| US7644255B2 (en) | Method and apparatus for enable/disable control of SIMD processor slices | |
| US20180349280A1 (en) | Snoop filtering for multi-processor-core systems | |
| US9471494B2 (en) | Method and apparatus for cache line write back operation | |
| US9952989B2 (en) | Aggregation of interrupts using event queues | |
| US9489203B2 (en) | Pre-fetching instructions using predicted branch target addresses | |
| US20130024647A1 (en) | Cache backed vector registers | |
| US20140129806A1 (en) | Load/store picker | |
| WO2014084918A1 (en) | Providing extended cache replacement state information | |
| WO2006132798A2 (en) | Microprocessor including a configurable translation lookaside buffer | |
| TWI465920B (zh) | Structure access processors, methods, systems, and instructions | |
| CN111752616 (zh) | System, apparatus, and method for symbolic store address generation | |
| US8645588B2 (en) | Pipelined serial ring bus | |
| US20090300319A1 (en) | Apparatus and method for memory structure to handle two load operations | |
| US20140310500A1 (en) | Page cross misalign buffer | |
| CN111095203 (zh) | Inter-cluster communication of live-in register values | |
| WO2013147895A2 (en) | Dynamic physical register use threshold adjustment and cross thread stall in multi-threaded processors |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 11771344; Country of ref document: EP; Kind code of ref document: A1 |
| | ENP | Entry into the national phase | Ref document number: 2013532870; Country of ref document: JP; Kind code of ref document: A |
| | ENP | Entry into the national phase | Ref document number: 20137008998; Country of ref document: KR; Kind code of ref document: A |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | WWE | Wipo information: entry into national phase | Ref document number: 2011771344; Country of ref document: EP |