GB2410584A - A simultaneous multi-threading processor accessing a cache in different power modes according to a number of threads - Google Patents


Info

Publication number
GB2410584A
GB2410584A GB0508862A
Authority
GB
United Kingdom
Prior art keywords
performance level
threads
circuit
smt processor
memory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB0508862A
Other versions
GB2410584B (en)
GB0508862D0 (en)
Inventor
Gi-Ho Park
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US10/631,601 external-priority patent/US7152170B2/en
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Priority claimed from GB0403738A external-priority patent/GB2398660B/en
Publication of GB0508862D0 publication Critical patent/GB0508862D0/en
Publication of GB2410584A publication Critical patent/GB2410584A/en
Application granted granted Critical
Publication of GB2410584B publication Critical patent/GB2410584B/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38Concurrent instruction execution, e.g. pipeline or look ahead
    • G06F9/3836Instruction issuing, e.g. dynamic instruction scheduling or out of order instruction execution
    • G06F9/3851Instruction issuing, e.g. dynamic instruction scheduling or out of order instruction execution from multiple instruction streams, e.g. multistreaming
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0877Cache access modes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/30181Instruction operation extension or modification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/30181Instruction operation extension or modification
    • G06F9/30189Instruction operation extension or modification according to execution mode, e.g. mode flag
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0806Multiuser, multiprocessor or multiprocessing cache systems
    • G06F12/0842Multiuser, multiprocessor or multiprocessing cache systems for multiprocessing or multitasking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/10Providing a specific technical effect
    • G06F2212/1028Power efficiency
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

A cache memory associated with a Simultaneous Multi-Threading (SMT) processor comprises a tag memory and a data memory. The tag and data memories are accessed in one of two modes: concurrently, with both accessed at the same time, or sequentially, with the tag memory accessed before the data memory. The mode of memory access is chosen according to the number of threads running on the processor, allowing the processor to operate in a high-power or low-power mode and thus to scale power consumption according to activity.

Description

SIMULTANEOUS MULTI-THREADING PROCESSOR CIRCUITS AND
COMPUTER PROGRAM PRODUCTS CONFIGURED TO OPERATE AT
DIFFERENT PERFORMANCE LEVELS BASED ON A NUMBER OF
OPERATING THREADS AND METHODS OF OPERATING
CLAIM FOR PRIORITY
This application claims priority to Korean Application No. 2003-10759 filed February 20, 2003, the entire contents of which are incorporated herein by reference.
FIELD OF THE INVENTION
The invention relates to computer processor architecture in general, and more particularly to simultaneous multi-threading computer processors, associated computer program products, and methods of operating same.
BACKGROUND
Simultaneous Multi-Threading (SMT) is a processor architecture that uses hardware multithreading to allow multiple independent threads to issue instructions during each cycle. Unlike other hardware multithreaded architectures in which only a single hardware context (i.e., thread) is active on any given cycle, SMT architecture can allow all thread contexts to simultaneously compete for and share processor resources.
An SMT processor can utilize otherwise wasted cycles to execute instructions that may reduce the effects of long latency operations in the SMT processor.
Moreover, as the number of threads increases, so may the performance also increase, which may also increase the power consumed by the SMT processor.
A block diagram of a conventional SMT processor is illustrated in Figure 1.
The operation of the conventional SMT processor in Figure 1 is discussed in Dean M. Tullsen, Susan J. Eggers, Henry M. Levy, Jack L. Lo, Rebecca L. Stamm, et al., Exploiting Choice: Instruction Fetch and Issue on an Implementable Simultaneous Multithreading Processor, The 23rd Annual International Symposium on Computer Architecture, pp. 191-202, 1996, the disclosure of which is hereby incorporated herein by reference. The architecture and operation of conventional SMT processors is well understood in the art and will not be discussed herein in further detail.
SUMMARY
Embodiments according to the invention can provide processing circuits, computer program products, and/or methods for operating at different performance levels based on a number of threads operated by a Simultaneous Multi-Threading (SMT) processor. For example, in some embodiments according to the invention, processing circuits, such as a floating point unit or a data cache, that are associated with the operation of a thread in the SMT processor can operate in one of a high power mode or a low power mode based on the number of threads currently operated by the SMT processor. Furthermore, as the number of threads operated by the SMT processor increases, the performance levels of the processing circuits can be decreased, thereby providing the architectural benefits of the SMT processor while allowing a reduction in the amount of power consumed by the processing circuits associated with the threads. Alternatively, the SMT processor may operate at the same power but at higher performance, or may consume more power but perform at higher performance levels than conventional SMT processors.
In some embodiments according to the invention, the processing circuit can be configured to operate at a first performance level when the number of threads currently operated by the SMT processor is less than or equal to a threshold value and can be configured to operate at a second performance level when the number of threads currently operated by the SMT processor is greater than the threshold value.
In some embodiments according to the invention, a performance level control circuit can be configured to provide a performance level for the processing circuit based on the number of threads currently operated by the SMT processor. In some embodiments according to the invention, the performance level control circuit can increase the performance level provided to the processing circuit to a first performance level when the number of threads currently operated by the SMT processor is less than or equal to a threshold value. The performance level control circuit can decrease the performance level provided to the at least one processing circuit to a second performance level that is less than the first performance level when the number of threads currently operated by the SMT processor exceeds the threshold value.
In some embodiments according to the invention, the performance level control circuit further decreases the performance level provided to the processing circuit to a third performance level that is less than the second performance level when the number of threads currently operated by the SMT processor exceeds a second threshold value that is greater than the first threshold value.
Various embodiments of performance level variation can be provided according to the invention. For example, in some embodiments according to the invention, the processing circuit can be a cache memory circuit that includes a tag memory and a data memory configured to provide cached data concurrent with an access to the tag memory when the cache memory circuit operates at a first performance level. The data memory can be configured to provide cached data responsive to a hit in the tag memory when the cache memory circuit operates at a second performance level that is less than the first performance level.
In some embodiments according to the invention, the cache memory can be at least one of a data cache memory configured to store data operated on by instructions and an instruction cache memory configured to store instructions that operate on associated data. In some embodiments according to the invention, the data memory can be further configured to not provide cached data responsive to a miss in the tag memory when operating at the second performance level.
In some embodiments according to the invention, the processing circuit can be a floating point unit. In some embodiments according to the invention, the floating point unit can be a first floating point unit configured to operate at a first performance level when the number of threads operated by the SMT processor is less than or equal to a threshold value, and the SMT processor can further include a second floating point circuit that is configured to operate at a second performance level, that is less than the first performance level, when the number of threads operated by the SMT processor is greater than the threshold value.
In some embodiments according to the invention, the performance level control circuit can be configured to increase or decrease the number of threads currently operated by the SMT processor responsive to threads being created and completed, respectively, in the SMT processor.
In some embodiments according to the invention, a second processing circuit can be configured to operate at a second performance level that is less than the first performance level responsive to the number of threads currently operated in the SMT processor being increased to greater than the threshold value.
In some embodiments according to the invention, the performance level control circuit can be configured to decrease a performance level provided to the at least one processing circuit responsive to creation of a new thread that increases the number of threads currently operated by the SMT processor from less than or equal to a threshold value to greater than the threshold value. In some embodiments according to the invention, the performance level control circuit can be configured to reduce a performance level of the processing circuit to one of a plurality of descending performance levels as the number of threads currently operated by the SMT processor exceeds each of a plurality of ascending threshold values.
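The mapping from thread count to one of a plurality of descending performance levels across ascending threshold values can be sketched as follows (a hypothetical Python illustration; the function name, the particular thresholds, and the level numbering are assumptions for illustration, not taken from the embodiments):

```python
from bisect import bisect_left

def performance_level(num_threads, thresholds):
    """Map the current thread count to a performance level index.

    Level 0 is the highest performance level; each ascending threshold
    that the thread count exceeds lowers the level by one, so the level
    is simply the number of thresholds strictly below num_threads."""
    return bisect_left(sorted(thresholds), num_threads)

# With thresholds of 2 and 4 threads: 1-2 threads select level 0,
# 3-4 threads select level 1, and 5 or more threads select level 2.
```
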
In some embodiments according to the invention, the performance level control circuit can be configured to maintain a first performance level for a first processing circuit and to provide a second performance level, that is less than the first performance level, to a second processing circuit responsive to the number of threads currently operated by the SMT processor increasing from less than or equal to a threshold value to greater than the threshold value.
In other embodiments according to the invention, a performance level control circuit can be configured to provide a performance level to processing circuits in the SMT processor based on a number of threads currently operated by the SMT processor.
In still other embodiments according to the invention, a thread management circuit can be configured to assign processing circuits associated with the SMT processor to threads operated in the SMT processor as the threads are created. A performance level control circuit can be configured to provide one of a plurality of performance levels to the processing circuits based on a number of threads currently operated by the SMT processor compared to at least one threshold value.
In still other embodiments according to the invention, a cache memory associated with an SMT processor can include a tag memory and a data memory accessed either concurrently or subsequent to the tag memory based on a number of threads currently operated by the SMT processor.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 is a block diagram that illustrates a conventional Simultaneous Multi-Threading (SMT) processor architecture.
Figure 2 is a block diagram that illustrates embodiments of an SMT processor according to the invention.
Figure 3 is a block diagram that illustrates embodiments of a thread management circuit according to the invention.
Figure 4 is a block diagram that illustrates embodiments of a performance level control circuit according to the invention.
Figure 5 is a flowchart that illustrates embodiments of performance level control circuits according to the invention.
Figure 6 is a block diagram that illustrates embodiments of a cache memory according to the invention.
Figure 7 is a block diagram that illustrates embodiments of an SMT processor according to the invention.
Figure 8 is a block diagram that illustrates embodiments of an SMT processor according to the invention.
Figure 9 is a block diagram that illustrates embodiments of an SMT processor according to the invention.
Figure 10 is a block diagram that illustrates embodiments of a performance level control circuit according to the invention.
Figure 11 is a flowchart that illustrates embodiments of a performance level control circuit according to the invention.
DESCRIPTION OF EMBODIMENTS ACCORDING TO THE INVENTION
The invention now will be described more fully hereinafter with reference to the accompanying drawings, in which illustrative embodiments of the invention are shown. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art. Like numbers refer to like elements throughout.
It will be understood that although the terms first and second are used herein to describe various elements, these elements should not be limited by these terms.
These terms are only used to distinguish one element from another element. Thus, a first element discussed below could be termed a second element, and similarly, a second element may be termed a first element without departing from the teachings of this disclosure.
As will be appreciated by one of skill in the art, the present invention may be embodied as circuits, methods, and/or computer program products.
Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the invention may take the form of a computer program product on a computer-usable storage medium having computer-usable program code embodied in the medium. Any suitable computer readable medium may be utilized including hard disks, CD-ROMs, optical storage devices, or magnetic storage devices.
Computer program code or "code" for carrying out operations according to the present invention may be written in an object oriented programming language such as JAVA (RTM), Smalltalk or C++, JavaScript (RTM), Visual Basic, TSQL, Perl, or in various other programming languages. Software embodiments of the present invention do not depend on implementation with a particular programming language. Portions of the code may execute entirely on one or more systems utilized by an intermediary server.
The code may execute entirely on one or more computer systems, or it may execute partly on a server and partly on a client within a client device, or as a proxy server at an intermediate point in a communications network. In the latter scenario, the client device may be connected to a server over a LAN or a WAN (e.g., an intranet), or the connection may be made through the Internet (e.g., via an Internet Service Provider). The invention may be embodied using various protocols over various types of computer networks.
The invention is described below with reference to block diagrams and flowchart illustrations of methods, systems and computer program products according to embodiments of the invention. It is understood that each block of the block diagrams and flowcharts, and combinations of blocks in the block diagrams and flowcharts, can be implemented by computer program instructions. These computer program instructions may be provided to a Simultaneous Multi-Threading (SMT) processor circuit, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the block diagrams and/or flowchart block or blocks.
These computer program instructions may be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer readable memory produce an article of manufacture including instruction means which implement the function specified in the block diagrams and/or flowchart block or blocks.
The computer program instructions may be loaded into an SMT processor circuit or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the block diagrams and/or flowchart block or blocks.
Embodiments according to the invention can provide processing circuits that are associated with the operation of threads in an SMT processor wherein the processing circuits are configured to operate at different performance levels based on a number of threads currently operated by the SMT processor. It will be understood that different performance levels can include different operating speeds of circuits and/or different levels of precision. In some embodiments according to the invention, processing circuits according to the invention may operate at different clock speeds and/or use different circuit types (such as different types of CMOS devices) to provide the different performance levels. For example, in some embodiments according to the invention, processing circuits, such as a floating point unit or a data cache, that are associated with the operation of a thread in the SMT processor can operate in one of a high power mode at a high clock speed or a low power mode at a lower clock speed based on the number of threads currently operated by the SMT processor.
Furthermore, as the number of threads operated by the SMT processor increases, the performance levels of the processing circuits can be decreased, thereby providing the architectural benefits of the SMT processor while allowing a reduction in the amount of power consumed by the processing circuits associated with the threads.
It will be understood that embodiments according to the invention can exhibit thread-level parallelism that can use multiple threads of execution that are inherently parallel to one another. As used herein, a "thread" can be a separate process having associated instructions and data. A thread can represent a process that is a portion of a parallel computer program having multiple processes. A thread can also represent a separate computer program that operates independently from other programs. Each thread can have an associated state, defined, for example, by respective states for associated instructions, data, Program Counter, and/or registers. The associated state for the thread can include enough information for the thread to be executed by a processor.
In some embodiments according to the invention, a performance level control circuit is configured to provide the respective performance levels to the processing circuits that are allocated to the threads created in the SMT processor. For example, the performance level control circuit can provide a first performance level so that a processing circuit can operate in a high power mode and, further, can provide a second performance level to the processing circuit for operation in a low power mode.
In still other embodiments according to the invention, intermediate performance levels (i.e., other performance levels between high power and low power) are provided by the performance level control circuit.
In some embodiments according to the invention, the processing circuit that operates at different performance levels can be a cache memory that includes a tag memory and a data memory. When the cache memory operates at the first performance level (i.e., in high power mode), the tag memory and data memory can be accessed concurrently regardless of whether an access to the tag memory results in a hit. The concurrent access of the data memory can provide greater performance, as the hit rate in the tag memory may be high. Alternatively, the cache memory can also operate at a second performance level (i.e., a lower power mode) wherein the data memory is only accessed responsive to a hit in the tag memory. Therefore, some of the power consumption associated with accessing the data memory can be avoided in cases where a tag miss occurs. Furthermore, in cases where a tag hit occurs, the access to the tag memory and the access to the data memory may be offset in time.
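The two cache access modes can be modeled behaviorally as follows (a hypothetical Python sketch; the direct-mapped organization and the pre-split index/tag address are assumptions for illustration):

```python
def cache_read(index, tag, tag_memory, data_memory, high_power):
    """Model of the two cache access modes for a direct-mapped cache.

    High-power mode: the tag and data arrays are read concurrently, so
    the data read is wasted energy on a miss. Low-power mode: the data
    array is read only after the tag comparison hits, saving power at
    the cost of serializing the two accesses."""
    if high_power:
        stored_tag = tag_memory[index]   # both arrays read in parallel
        data = data_memory[index]
        return data if stored_tag == tag else None  # miss: data read discarded
    if tag_memory[index] == tag:         # tag array accessed first
        return data_memory[index]        # data array accessed only on a hit
    return None                          # miss: data array never accessed

# Tiny two-line cache used for illustration.
tags = {0: 0x1A, 1: 0x2B}
data = {0: "hot line", 1: "cold line"}
```

Either mode returns the same data on a hit; the modes differ only in when (and whether) the data array is energized.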
In still other embodiments, the processing circuits associated with the operation of threads by the SMT processor can be an instruction cache or other types of processing circuits, such as floating point circuits or integer/load-store circuits.
Moreover, each of these processing circuits may operate at different performance levels. For example, in some embodiments according to the invention, the cache memory, the instruction cache, and floating point circuits and integer/load-store circuits can operate at different performance levels concurrently.
In still further embodiments according to the invention, processing circuits of the same type (such as floating point circuits and integer/load-store circuits) can be separated into different performance categories such that some of the circuits are designated to operate at the first performance level whereas other processing circuits are designated to operate at the second performance level. For example, in some embodiments according to the invention, some of the floating point circuits available for allocation to threads in the SMT processor are configured to operate in a high power mode whereas other floating point circuits available for allocation to threads in the SMT processor are configured to operate in low power mode.
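Allocation from the two performance categories might look like the following (a hypothetical sketch; the pool-with-fallback policy and the function name are assumptions — the embodiments only require that some circuits are designated for each performance level):

```python
def allocate_fpu(num_threads, threshold, high_power_pool, low_power_pool):
    """Pick a floating point circuit for a newly created thread.

    While the thread count is at or below the threshold, the thread is
    served from the high power pool; beyond the threshold it is served
    from the low power pool. Falls back to the other pool when the
    preferred one is empty, and returns None when no circuit is free."""
    if num_threads <= threshold:
        preferred, fallback = high_power_pool, low_power_pool
    else:
        preferred, fallback = low_power_pool, high_power_pool
    for pool in (preferred, fallback):
        if pool:
            return pool.pop()   # remove the circuit from the free pool
    return None
```
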
Figure 2 is a block diagram that illustrates embodiments of SMT processors according to the invention. According to Figure 2, when a new thread is created in an SMT processor 200, a thread management circuit 205 allocates a set of processing circuits for use by the newly created thread. The allocated processing circuits can include a program counter 215, a set of floating point registers 245, and a set of integer registers 250. Other processing circuits can also be allocated to the newly created thread. It will be understood that when the thread completes, the processing circuits allocated for use by the thread can be released so that they may be reallocated to subsequently created threads.
In operation, a fetch circuit 210 fetches an instruction from an instruction cache 220, based on a location provided by the allocated program counter 215, which is provided to a decoder 225. The decoder 225 outputs a decoded instruction to a register renaming circuit 230. A renamed instruction is provided by the register renaming circuit 230 to either a floating point instruction queue 235 or an integer instruction queue 240 depending on the type of instruction provided by the register renaming circuit 230. For example, if the type of instruction provided by the register renaming circuit 230 is a floating point instruction, the instruction will be loaded into the floating point instruction queue 235, whereas if the instruction provided by the register renaming circuit 230 is an integer instruction the instruction is loaded into the integer instruction queue 240.
The instructions from either the floating point instruction queue 235 or the integer instruction queue 240 are loaded into an associated register for execution by a respective floating point circuit 255 or integer/load-store circuit 260. In particular, floating point instructions are transferred from the floating point instruction queue 235 to a set of floating point registers 245. The instructions in the floating point registers 245 can be accessed by the floating point circuits 255. The floating point circuits 255 can also access floating point data stored in a data cache 265, such as when instructions executed by the floating point circuits 255 (from the floating point registers 245) refer to data stored in the data cache 265.
Integer instructions are transferred from the integer instruction queue 240 to integer registers 250. The integer/load-store circuits 260 can access the integer instructions stored in the integer registers 250 so that the instructions can be executed.
The integer/load-store circuits 260 can also access the data cache 265 when, for example, the integer instructions stored in the integer registers 250 refer to integer data stored in the data cache 265.
According to embodiments of the invention, the thread management circuit 205 provides a performance level to the data cache 265. In particular, the performance level can control whether the data cache 265 operates at a first performance level or a second performance level (i.e., in a high power mode or in a low power mode). For example, the thread management circuit 205 can provide a first performance level wherein the data cache 265 operates in a high power mode or can provide a second performance level wherein the data cache 265 operates in a low power mode. It will be understood that although the operation of the data cache 265 is described as being either at a first performance level or a second performance level, in some embodiments according to the invention, more performance levels can be used.
Figure 3 is a block diagram that illustrates embodiments of thread management circuits according to the invention. According to Figure 3, a thread management circuit 305 receives information from the operating system, or alternatively, from a thread generation circuit related to the creation of a thread in the SMT processor. The thread management circuit 305 includes a thread allocation circuit 330 that can allocate processing circuits according to the invention for use by the thread created by the SMT processor.
The thread management circuit 305 also includes a performance level control circuit 340 that provides the performance level to the processing circuits associated with the thread created by the SMT processor. The performance level control circuit 340 can provide the performance level to the processing circuit based on the number of threads currently operated by the SMT processor. In particular, as the number of threads operated by the SMT processor increases, the performance level control circuit may provide decreasing performance levels to the processing circuits associated with the threads operated by the SMT processor. The performance level control circuit 340 can determine the number of threads currently operated by the SMT processor by incrementing and decrementing an internal count responsive to the creation and completion of threads operated by the SMT processor.
It will be understood that the performance level provided to the processing circuits according to the invention may have a default value, such as the first performance level (or high power mode). Accordingly, as threads are added, the performance level provided to the processing circuits can be reduced to decrease the performance and, therefore, the power dissipation of the processing circuits. It will also be understood that the performance level can be provided to the processing circuits via a signal line that can conduct a signal having at least two states: the first performance level and the second performance level. For example, after the SMT processor is initialized, the number of threads operated by the SMT processor can be zero, wherein the default value of the performance level provided to the processing circuits is the default first performance level (high power mode). As threads are added and eventually exceed a threshold number, the performance level can be changed to the second performance level by, for example, changing the state of the signal that indicates which performance level is to be used.
Figure 4 is a block diagram that illustrates embodiments of performance level control circuits according to the invention. According to Figure 4, a counter circuit 405 can receive information from the operating system or thread generation circuit discussed in reference to Figure 3 to determine the number of threads currently operated by the SMT processor. For example, if the counter circuit 405 indicates that four threads have previously been started by the SMT processor when information is received regarding the creation of a new thread, the counter circuit 405 can be incremented to reflect that five threads are currently operated by the SMT processor.
The counter circuit 405 can provide the number of threads currently operated by the SMT processor to a comparator circuit 410. A threshold value is provided to the comparator circuit 410 along with the number of threads currently operated by the SMT processor. The threshold value can be a programmable value that indicates the number of threads beyond which the performance level is changed. Accordingly, when the number of threads currently operated by the SMT processor is less than or equal to the threshold value, the performance level provided to the processing circuits can be maintained at a first performance level, such as a high power mode. However, when the number of threads currently operated by the SMT processor exceeds the threshold value, the performance level can be decreased so as to reduce the power dissipated by the SMT processor.
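Viewed in software terms, the counter and comparator described above form a small state machine. The following is a minimal illustrative sketch, not the patented circuit; the class, method, and constant names are my own:

```python
class PerformanceLevelControl:
    """Illustrative model of a thread counter feeding a threshold comparator."""

    HIGH_POWER = 1  # first performance level (high power mode)
    LOW_POWER = 2   # second performance level (low power mode)

    def __init__(self, threshold):
        self.threshold = threshold  # programmable threshold value
        self.num_threads = 0        # models the counter circuit

    def thread_created(self):
        # increment the count when a thread is created
        self.num_threads += 1

    def thread_completed(self):
        # decrement the count when a thread completes
        self.num_threads -= 1

    def performance_level(self):
        # models the comparator: at or below the threshold, stay in high power mode
        if self.num_threads <= self.threshold:
            return self.HIGH_POWER
        return self.LOW_POWER
```

With a threshold of four, creating a fifth thread switches the output to the second performance level, and completing a thread switches it back.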
Figure 5 is a flow chart that illustrates operations of embodiments of performance level control circuits according to the invention. According to Figure 5, when the SMT processor is initialized, the number of threads currently operated by the SMT processor is zero (Block 500). As threads are created and completed in the SMT processor, the number of threads, N, currently operating in the SMT processor is incremented or decremented (Block 505). For example, in a case where four threads are operated by the SMT processor, the value of N would be four. When a new thread is created, the value of N is incremented to five, whereas if one of the threads subsequently completes, the value of N is decremented back to four.
The number of threads currently operating in the SMT processor is compared to a threshold value (Block 510). If the number of threads currently operated by the SMT processor is less than or equal to the threshold value, the performance level control circuit provides a first performance level to the processing circuits allocated to the threads (Block 515). For example, if a processing circuit allocated to the thread is the cache memory discussed in reference to Figure 2, the cache memory can operate so that the tag memory and the data memory are accessed concurrently (i.e., in high power mode). On the other hand, if the number of threads operated by the SMT processor is greater than the threshold value (Block 510), the performance level control circuit provides a second performance level to the processing circuits associated with threads (Block 520). For example, in the embodiments discussed above in reference to Figure 2, at the second performance level, the cache memory can operate such that the data memory is only accessed responsive to a hit in the tag memory (i.e., in low power mode).
Figure 6 is a block diagram that illustrates embodiments of a cache memory according to the invention as shown in Figure 2. According to Figure 6, a tag memory 610 is configured to store addresses of data stored in a data memory 620.
The tag memory 610 is accessed using an address that is associated with data to be acted on by the SMT processor. Entries in the tag memory 610 are compared with the address by a tag compare circuit 630 to determine whether the data needed by the SMT processor is stored in the data memory 620. If the tag compare circuit 630 determines that the tag memory 610 indicates that the required data is stored in the data memory 620, a tag hit occurs. Otherwise, a tag miss occurs. If a tag hit occurs, an output enable circuit 650 enables data to be output from the data memory 620.
According to embodiments of the invention, the performance level provided by the performance level control circuit is used to control how the tag memory 610 and the data memory 620 operate. In particular, if a first performance level is provided to the cache memory, a data memory enable circuit 640 enables the data memory 620 to be accessed concurrent with the tag memory 610 regardless of whether a tag hit occurs. In contrast, if a second performance level is provided to the cache memory, the data memory enable circuit 640 does not allow the data memory 620 to be accessed unless a tag hit occurs.
Therefore, in embodiments according to the invention, in a high power mode the tag memory 610 and the data memory 620 can be accessed concurrently to provide improved performance, whereas in a low power mode the data memory 620 is accessed only if the tag memory 610 indicates that a tag hit has occurred, thereby allowing the power dissipated by the cache memory to be reduced.
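The two access policies can be contrasted with a toy cost model. The latency steps and per-access energy units below are invented purely for illustration; a real cache's figures depend on the implementation:

```python
TAG_ENERGY = 1   # hypothetical energy units for one tag memory access
DATA_ENERGY = 4  # hypothetical energy units for one data memory access

def cache_access(tag_hit, high_power_mode):
    """Return (latency_steps, energy_units) for one lookup under each mode.

    In high power mode the tag and data memories are accessed concurrently,
    regardless of the tag outcome. In low power mode the data memory is
    accessed only after, and only if, the tag comparison hits.
    """
    if high_power_mode:
        # tag and data accessed in the same step; both energies always paid
        return 1, TAG_ENERGY + DATA_ENERGY
    if tag_hit:
        # serial access: tag first, then data -> one extra step on a hit
        return 2, TAG_ENERGY + DATA_ENERGY
    # miss in low power mode: the data memory is never enabled
    return 1, TAG_ENERGY
```

The model makes the tradeoff visible: high power mode never pays the serialization step, but on a miss it wastes the data memory energy that low power mode avoids.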
Figure 7 is a block diagram that illustrates embodiments according to the invention utilized in an instruction cache. According to Figure 7, the thread management circuit 700 allocates the instruction cache 722 to a new thread. The performance level control circuit included in the thread management circuit 700 can provide a performance level to the instruction cache 722 to control how the instruction cache 722 operates.
In particular, the instruction cache 722 can operate in a high power mode in response to the first performance level and can be configured to operate in a low power mode in response to a second performance level. As discussed above in reference to, for example, Figure 5, the first and second performance levels can be provided to the instruction cache 722 based on the number of threads that are currently operated by the SMT processor. Furthermore, the instruction cache 722 can operate at the different performance levels in ways similar to those described above in reference to Figure 6, wherein the data memory 620 is only accessed responsive to a tag hit in low power mode. For example, different performance levels may be provided in the instruction cache to allow direct addressing when successive memory accesses are determined to be to the same cache line. This type of access may be provided using a direct-addressed cache, which can allow a read of the tag Random Access Memory (RAM) to be avoided and may also allow a tag compare to be eliminated. Furthermore, in direct-addressed caches a translation from a virtual to a physical address may also be avoided.
Figure 8 is a block diagram that illustrates embodiments of separate processing circuits having different performance levels according to the invention.
According to Figure 8, a first floating point circuit 805 can be configured to operate at a first performance level whereas a second floating point circuit 815 can be configured to operate at a second performance level that is lower than the first performance level. In other words, the first floating point circuit 805 can be for use in high power mode whereas the second floating point circuit 815 can be used in low power mode.
A first integer/load-store circuit 810 is configured to perform at the first performance level, whereas a second integer/load-store circuit 820 is configured to operate at the second performance level. A thread management circuit 800 is configured to provide two separate performance levels. In particular, the first performance level is provided to the first floating point circuit 805 and to the first integer/load-store circuit 810. The second performance level provided by the thread management circuit 800 is provided to the second floating point circuit 815 and to the second integer/load-store circuit 820. Accordingly, the first floating point circuit 805 and the first integer/load-store circuit 810 can be allocated to threads that operate at the first performance level, whereas the second floating point circuit 815 and the second integer/load-store circuit 820 can be allocated to threads that operate at the second performance level. It will be understood that the first and second performance levels can be provided by the thread management circuit 800 either separately or concurrently. It will also be understood that more than two separate floating point circuits and integer/load-store circuits can be provided, as can additional performance levels.
According to embodiments of the invention, the first performance level provided to the first floating point circuit 805 and the first integer/load-store circuit 810 can be provided when the number of threads operated in the SMT processor is less than or equal to a first threshold value. The second performance level can be provided to the second floating point circuit 815 and the second integer/load-store circuit 820 when the number of threads currently operated by the SMT processor exceeds the first threshold value. Accordingly, when the number of threads operated by the SMT processor exceeds the threshold value, all threads (both those previously existing and those newly created) can use the second floating point unit 815 and the second integer/load-store circuit 820 to reduce the power consumed by the SMT processor.
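The threshold-based selection between the two circuit pairs can be sketched as a small dispatch function. This is an illustrative model only; the function name and the string labels (which borrow the reference numerals from Figure 8) are mine:

```python
def allocate_units(num_threads, threshold):
    """Pick which floating point / integer-load-store pair serves the threads.

    At or below the threshold, threads use the first (full performance)
    circuits; above it, threads fall back to the second (low power)
    circuits, mirroring the Figure 8 description.
    """
    if num_threads <= threshold:
        return {"fp": "fp_circuit_805", "int_ls": "int_ls_circuit_810"}
    return {"fp": "fp_circuit_815", "int_ls": "int_ls_circuit_820"}
```

For example, with a threshold of four, a workload of three threads is served by circuits 805 and 810, while a workload of five threads is served by circuits 815 and 820.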
It will be understood that floating point circuits and integer/load-store circuits according to the invention may operate at different clock speeds and/or use different circuit types (such as different types of CMOS devices) to provide the different performance levels. For example, in some embodiments according to the invention, a floating point circuit that is associated with the operation of a thread in the SMT processor can operate in one of a high power mode at a high clock speed or a low power mode at a lower clock speed based on the number of threads currently operated by the SMT processor.
Figure 9 is a block diagram that illustrates embodiments of SMT processors including a plurality of processing circuits that are responsive to separate performance levels provided by a thread management circuit 900. In particular, the thread management circuit 900 provides three separate performance levels to an instruction cache 930, a data cache 965, first and second floating point circuits 905, 915, and first and second integer/load-store circuits 910, 920. It will be understood that the performance level provided to the first and second floating point circuits 905, 915 and to the first and second integer/load-store circuits 910, 920 can operate as discussed above in reference to Figure 8. Furthermore, the data cache 965 and the instruction cache 930 can operate as described above in reference to Figures 2 and 7, respectively. Accordingly, the separate performance levels can be provided to the different processing circuits so that the processing circuits can operate at different performance levels, thereby providing greater control over a tradeoff between performance and power consumption. For example, the instruction cache 930 may operate at the first performance level while the data cache 965, the first and second floating point circuits 905, 915, and the first and second integer/load-store circuits 910, 920 operate at the second performance level. Other combinations of performance levels may also be used.
Figure 10 is a block diagram that illustrates operations of embodiments of a performance level control circuit included in the thread management circuit 900 in Figure 9. In particular, the performance level control circuit includes a counter 1000 that is incremented and decremented in response to threads being created and completed in the SMT processor. First through third registers 1015, 1020, and 1025 each store a separate threshold value for the number of threads currently operating in the SMT processor. Three comparator circuits 1030, 1035, and 1040 are coupled to respective ones of the registers 1015, 1020, and 1025. In particular, the first register 1015 that stores the first threshold value is coupled to the first comparator circuit 1030.
The second register 1020 that stores the second threshold value is coupled to the second comparator circuit 1035. The third register 1025 that stores the third threshold value is coupled to the third comparator circuit 1040.
Each of the comparator circuits 1030, 1035, 1040 compares the number of threads currently operated by the SMT processor with the threshold value stored in the respective register. If the first comparator circuit 1030 determines that the current number of threads operated by the SMT processor is greater than the first threshold value in the first register 1015, the first comparator circuit 1030 generates a performance level 1045, which, as shown in Figure 9, is coupled to the data cache 965.
Accordingly, when the number of threads operated by the SMT processor exceeds the threshold value in the first register 1015, the performance level of the data cache 965 is changed from the first performance level to the second performance level (i.e., from high power mode to low power mode).
If the second comparator circuit 1035 determines that the number of threads currently operated by the SMT processor exceeds the threshold value stored in the second register 1020, the second comparator circuit 1035 generates a performance level 1050 that is coupled to the instruction cache 930, thereby changing the performance level of the instruction cache 930 from the first performance level to the second performance level (i.e., from high power mode to low power mode).
If the third comparator circuit 1040 determines that the number of the threads currently operated by the SMT processor exceeds the threshold value stored in the third register 1025, the third comparator circuit 1040 generates a performance level 1055 that is coupled to the first and second floating point circuits 905, 915, and the first and second integer/load-store circuits 910, 920. Accordingly, the performance level of these processing circuits is also changed from the first performance level to the second performance level (i.e., from high power mode to low power mode). It will be understood that the performance level 1055 coupled to the floating point circuits and the integer/load-store circuits operate as discussed above in reference to Figure 8.
Figure 11 is a flow chart that illustrates method embodiments of the performance level control circuit illustrated in Figure 10. According to Figure 11, the number of threads currently operating in the SMT processor is equal to zero when the SMT processor is initialized (Block 1100). As threads are created and completed by the SMT processor, the number of threads currently operated by the SMT processor is incremented and decremented to provide the number, N, that represents the number of threads that are currently operated by the SMT processor (Block 1105).
If the number of threads currently operated by the SMT processor is less than or equal to the first threshold value (Block 1110), all processing circuits continue to operate at the first (or high) performance level (Block 1115). On the other hand, if the number of threads currently operated by the SMT processor exceeds the first threshold value (Block 1110), the processing circuits that are coupled to the performance level 1045 begin to operate at the second performance level (i.e., low power mode) (Block 1120).
If the number of threads currently operated by the SMT processor is less than or equal to a second threshold value (Block 1125), the processing circuits that are coupled to the performance level 1050 (and to the performance level 1055) begin to (or continue to) operate at the first performance level while the processing circuits coupled to the performance level 1045 (as discussed above) continue to operate at the second performance level (Block 1130).
If the number of threads currently operated by the SMT processor exceeds the second threshold value (Block 1125), the processing circuits coupled to the performance level 1050 begin to (or continue to) operate at the second performance level (Block 1135) along with the processing circuits coupled to the performance level 1045, whereas the processing circuits coupled to the performance level 1055 continue to operate at the first performance level.
If the number of threads currently operated by the SMT processor is less than or equal to a third threshold value (Block 1140), the processing circuits coupled to the performance level 1055 continue to operate at the first performance level whereas the processing circuits coupled to the performance level 1045 and the performance level 1050 continue to operate at the second performance level (Block 1145). If the number of threads currently operated by the SMT processor exceeds the third threshold value (Block 1140), the processing circuits coupled to the performance level 1055 begin to (or continue to) operate at the second performance level (i.e., in low power mode) (Block 1150).
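The staged comparisons of Figures 10 and 11 can be summarized in a single function mapping the thread count onto the three performance-level outputs. This is an illustrative sketch only; the function name and the HIGH/LOW encoding are my own, and the key names borrow the reference numerals from Figure 10:

```python
HIGH, LOW = 1, 2  # first and second performance levels

def staged_levels(n, t1, t2, t3):
    """Map thread count n onto the three performance-level outputs of Fig. 10.

    Output 1045 drives the data cache, 1050 the instruction cache, and 1055
    the floating point and integer/load-store circuits. Each output drops to
    the second (low power) level once n exceeds its threshold, so with
    t1 <= t2 <= t3 the circuits fall back to low power mode in stages.
    """
    return {
        "level_1045_data_cache": LOW if n > t1 else HIGH,
        "level_1050_instr_cache": LOW if n > t2 else HIGH,
        "level_1055_fp_int": LOW if n > t3 else HIGH,
    }
```

For example, with thresholds of 2, 4, and 6, a count of three threads puts only the data cache in low power mode, while a count of seven puts all three groups in low power mode.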
As discussed above, embodiments according to the invention can provide processing circuits that are associated with the operation of threads in an SMT processor wherein the processing circuits are configured to operate at different performance levels based on a number of threads currently operated by the SMT processor. For example, in some embodiments according to the invention, processing circuits, such as a floating point unit or a data cache, that are associated with the operation of a thread in the SMT processor can operate in one of a high power mode or a low power mode based on the number of threads currently operated by the SMT processor.
Furthermore, as the number of threads operated by the SMT processor increases, the performance levels of the processing circuits can be decreased, thereby providing the architectural benefits of the SMT processor while allowing a reduction in the amount of power consumed by the processing circuits associated with the threads.
For example, in some embodiments according to the invention, processing circuits according to the invention may operate at different clock speeds and/or use different circuit types (such as different types of CMOS devices) to provide the different performance levels. For example, in some embodiments according to the invention, processing circuits, such as a floating point unit or a data cache, that are associated with the operation of a thread in the SMT processor can operate in one of a high power mode at a high clock speed or a low power mode at a lower clock speed based on the number of threads currently operated by the SMT processor.
Many alterations and modifications may be made by those having ordinary skill in the art, given the benefit of the present disclosure, without departing from the spirit and scope of the invention. Therefore, it will be understood that the illustrated embodiments have been set forth only for the purposes of example, and that they should not be taken as limiting the invention as defined by the following claims. The following claims are, therefore, to be read to include not only the combination of elements which are literally set forth but all equivalent elements for performing substantially the same function in substantially the same way to obtain substantially the same result. The claims are thus to be understood to include what is specifically illustrated and described above, what is conceptually equivalent, and also what incorporates the essential idea of the invention.

Claims (3)

  1. A cache memory associated with a Simultaneous Multi-Threading (SMT) processor, the cache memory including a tag memory and a data memory, wherein the tag memory and the data memory are accessed either concurrently or the data memory is accessed subsequent to the tag memory, based on a number of threads currently operated by the SMT processor.
  2. A cache memory according to Claim 1 wherein the tag memory and the data memory are accessed concurrently responsive to the number of threads currently operated by the SMT processor being less than or equal to a threshold value.
  3. A cache memory according to Claim 1 wherein the data memory is accessed responsive to a hit in the tag memory responsive to the number of threads currently operated by the SMT processor being greater than a threshold value.
GB0508862A 2003-02-20 2004-02-19 Simultaneous multi-threading processor circuits and computer program products configured to operate at different performance levels Expired - Lifetime GB2410584B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
KR20030010759 2003-02-20
US10/631,601 US7152170B2 (en) 2003-02-20 2003-07-31 Simultaneous multi-threading processor circuits and computer program products configured to operate at different performance levels based on a number of operating threads and methods of operating
GB0403738A GB2398660B (en) 2003-02-20 2004-02-19 Simultaneous multi-threading processors operating at different performance levels

Publications (3)

Publication Number Publication Date
GB0508862D0 GB0508862D0 (en) 2005-06-08
GB2410584A true GB2410584A (en) 2005-08-03
GB2410584B GB2410584B (en) 2006-02-01

Family

ID=34743283

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0508862A Expired - Lifetime GB2410584B (en) 2003-02-20 2004-02-19 Simultaneous multi-threading processor circuits and computer program products configured to operate at different performance levels

Country Status (1)

Country Link
GB (1) GB2410584B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0768608A2 (en) * 1995-10-13 1997-04-16 Sun Microsystems, Inc. Maximal concurrent lookup cache for computing systems having a multi-threaded environment
US5717892A (en) * 1995-01-17 1998-02-10 Advanced Risc Machines Limited Selectively operable cache memory
WO2001048599A1 (en) * 1999-12-28 2001-07-05 Intel Corporation Method and apparatus for managing resources in a multithreaded processor


Cited By (7)

Publication number Priority date Publication date Assignee Title
US20150347308A1 (en) * 2014-05-27 2015-12-03 Qualcomm Incorporated Reconfigurable fetch pipeline
WO2015183467A1 (en) * 2014-05-27 2015-12-03 Qualcomm Incorporated Method and apparatus for cache access mode selection
US9529727B2 (en) * 2014-05-27 2016-12-27 Qualcomm Incorporated Reconfigurable fetch pipeline
JP2017517065A (en) * 2014-05-27 2017-06-22 クアルコム,インコーポレイテッド Reconfigurable fetch pipeline
KR101757355B1 (en) 2014-05-27 2017-07-12 퀄컴 인코포레이티드 Method and apparatus for cache access mode selection
US10007613B2 (en) * 2014-05-27 2018-06-26 Qualcomm Incorporated Reconfigurable fetch pipeline
EP3629184A1 (en) * 2014-05-27 2020-04-01 Qualcomm Incorporated Method and apparatus for cache access mode selection

Also Published As

Publication number Publication date
GB2410584B (en) 2006-02-01
GB0508862D0 (en) 2005-06-08


Legal Events

Date Code Title Description
PE20 Patent expired after termination of 20 years

Expiry date: 20240218