GB2536825A - Power efficient processor architecture - Google Patents

Power efficient processor architecture

Info

Publication number
GB2536825A
Authority
GB
United Kingdom
Prior art keywords
cores
core
interrupt
logic
mobile device
Prior art date
Legal status
Granted
Application number
GB1609345.2A
Other versions
GB2536825B (en)
GB201609345D0 (en)
Inventor
Sadagopan Srinivasan
Jaideep Moses
Andrew J. Herdrich
Rameshkumar G. Illikkal
Ravishankar Iyer
Srihari Makineni
Current Assignee
Intel Corp
Original Assignee
Intel Corp
Priority date
Filing date
Publication date
Application filed by Intel Corp
Priority to GB1609345.2A
Priority claimed from GB1402807.0A (published as GB2507696B)
Publication of GB201609345D0
Publication of GB2536825A
Application granted
Publication of GB2536825B
Legal status: Active

Classifications

    • G06F1/3293 Power saving characterised by the action undertaken by switching to a less power-consuming processor, e.g. sub-CPU
    • G06F1/3206 Monitoring of events, devices or parameters that trigger a change in power modality
    • G06F1/3243 Power saving in microcontroller unit
    • G06F1/329 Power saving characterised by the action undertaken by task scheduling
    • G06F9/5094 Allocation of resources, e.g. of the central processing unit [CPU], where the allocation takes into account power or heat criteria
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

A mobile device comprising an apparatus, with a DRAM and data storage coupled to said apparatus, the apparatus comprising: cryptographic and video accelerators; a memory controller; and a processor 100 comprising: a first plurality of cores 110 and a second plurality of cores 120, wherein the second plurality of cores is heterogeneous to and has lower power consumption than the first; an interconnect to couple the first cores, the second cores, and a shared cache memory coupled to at least the first cores; and a logic to cause a core of the second plurality of cores to execute an operation, wherein, based at least in part on a performance level of the core of the second cores, the logic is to cause an execution state of the core of the second cores to be transferred to a core of the first plurality of cores to enable the core of the first cores to execute the operation. Also disclosed are a method and a computer readable storage medium with instructions which, when executed, cause the processor of a mobile device to execute the operation defined previously.

Description

POWER EFFICIENT PROCESSOR ARCHITECTURE
Background
Typically, a processor uses a power saving sleep mode such as in accordance with an Advanced Configuration and Power Interface (ACPI) standard (e.g., Rev. 3.0b, published October 10, 2006) when possible. These so-called C-state core low power states (ACPI C-states), in addition to voltage and frequency scaling (DVFS or ACPI performance states (P-states)), can save power when a core is idle or not fully utilized. However, even in a multi-core processor context, a core is often woken from an efficient sleep state to perform a relatively simple operation, and is then returned to the sleep state. This operation can adversely affect power efficiency, as there is a cost in both latency and power consumption for exiting and returning to low power states. During the state transition power may be consumed in some types of processors without useful work being accomplished, to the detriment of power efficiency. Examples of operations to be handled upon exiting a low power state include keyboard inputs, timer interrupts, network interrupts and so on. To handle these operations in a power sensitive manner, current operating systems (OSs) change program behavior by processing larger amounts of data at a time, or moving to a tickless OS where there are no periodic timer interrupts, and only sporadic programmed ones. Another strategy is to use timer coalescing, where multiple interrupts are grouped and handled at the same time. But in addition to changing a program's behavior, all of these options raise complexity and still can lead to power inefficient operation. Further, some types of software (e.g., media playback) may make attempts to defeat hardware power efficiency mechanisms by requesting frequent, periodic wakes regardless of how much work needs to be accomplished. Thus, the tickless/timer coalescing strategies can save some power by reducing unnecessary wakes from deep C-states, but they require invasive changes to the OS and may take a significant amount of time to propagate through a computing ecosystem, as such changes are not implemented until a new version of an operating system is distributed.
Brief Description of the Drawings
FIG. 1 is a block diagram of a processor in accordance with one embodiment of the present invention.
FIG. 2 is a block diagram of a processor in accordance with another embodiment of the present invention.
FIG. 3 is a flow diagram of resume flow options between cores in accordance with one embodiment of the present invention.
FIG. 4 is a flow diagram of a method in accordance with an embodiment of the present invention.
FIG. 5 is a flow diagram of a method for transferring execution state in accordance with an embodiment of the present invention.
FIG. 6 is a block diagram of a processor in accordance with yet another embodiment of the present invention.
FIG. 7 is a block diagram of a processor in accordance with a still further embodiment of the present invention.
FIG. 8 is a block diagram of a processor in accordance with yet another embodiment of the present invention.
FIG. 9 is a timing diagram in accordance with an embodiment of the present invention.
FIG. 10 is a graphical illustration of power savings in accordance with an embodiment of the present invention.
FIG. 11 is a block diagram of a system in accordance with an embodiment of the present invention.
Detailed Description
In various embodiments, average power consumption can be reduced in a heterogeneous processor environment. This heterogeneous environment may include large fast cores and smaller more power-efficient cores that are combined for system and power efficiency reasons. Further still, embodiments may provide this power control in a manner that is transparent to an operating system (OS) executing on the processor. However, the scope of the present invention is not limited to heterogeneous environments, and can also be used in homogenous environments (from an OS-transparent but not necessarily hardware-heterogeneous perspective) to reduce average power (e.g., to keep as many cores asleep in a multiprocessor environment as possible). Embodiments may be especially suitable in hardware-accelerated environments such as tablet computer-based and system-on-chip (SoC) architectures where the cores are often asleep.
In general, embodiments may provide for power control by steering all wakeup signals to a smaller core rather than a larger core. In this way, it is possible to reduce average power by well over two times when the system is 95% idle. As will be described, in many embodiments this smaller core can be sequestered from the OS. That is, the presence of this smaller core is unknown to the OS, and this core is thus invisible to the OS. As such, embodiments can provide for power efficient processor operation via processor hardware in a manner that is transparent to the OS and applications executing on the processor.
Referring now to FIG. 1, shown is a block diagram of a processor in accordance with one embodiment of the present invention. As seen in FIG. 1, processor 100 may be a heterogeneous processor having a number of large cores, small cores and accelerators. Although described herein in the context of a multi-core processor, understand embodiments are not so limited and in implementations may be within a SoC or other semiconductor-based processing devices. Note that the accelerators can perform work whether the processor cores are powered up or not, based on a queue of input work. In the embodiment of FIG. 1, processor 100 includes a plurality of large cores. In the specific embodiment shown, two such cores 110a and 110b (generally, large cores 110) are shown, although understand that more than two such large cores may be provided. In various implementations, these large cores may be out-of-order processors having a relatively complex pipelined architecture and operating in accordance with a complex instruction set computing (CISC) architecture.
In addition, processor 100 further includes a plurality of small cores 120a-120n (generally, small cores 120). Although 8 such cores are shown in the embodiment of FIG. 1, understand the scope of the present invention is not limited in this aspect. In various embodiments, small cores 120 may be power efficient in-order processors, e.g., to execute instructions according to a CISC or a reduced instruction set computing (RISC) architecture. In some implementations, two or more of these cores may be coupled together in series to perform related processing, e.g., if several large cores are in power-saving states then one or more smaller cores may be active to perform work that would otherwise wake the large cores. In many embodiments, small cores 120 can be transparent to an OS, although in other embodiments the small and large cores may be exposed to the OS, with configuration options available. In general, any core mix between large and small cores can be used in different embodiments. For example, a single small core can be provided per large core, or in other embodiments a single small core may be associated with multiple large cores.
As used herein, the term "large core" may be a processor core that is of a relatively complex design and which may consume a relatively large amount of chip real estate as compared to a "small core," which may be of a lesser complexity design and consume a correspondingly smaller amount of chip real estate. In addition, the smaller cores are more power efficient than the larger cores, as they may have a smaller thermal design power (TDP) than the larger cores. However, understand that the smaller cores may be limited in their processing capabilities as compared to the large cores. For example, these smaller cores may not handle all operations that are possible in the large cores. And in addition, it is possible that the smaller cores can be less efficient in instruction processing. That is, instructions may be performed more rapidly in the large cores than the small cores.
As further seen, both large cores 110 and small cores 120 may be coupled to an interconnect 130. Different implementations of this interconnect structure can be realized in different embodiments. For example, in some embodiments the interconnect structure can be according to a front side bus (FSB) architecture or an Intel® Quick Path Interconnect (QPI) protocol. In other embodiments, the interconnect structure can be according to a given system fabric.
Still referring to FIG. 1, multiple accelerators 140a-140c also may be coupled to interconnect 130. Although the scope of the present invention is not limited in this regard, the accelerators may include media processors such as audio and/or video processors, cryptographic processors, fixed function units and so forth. These accelerators may be designed by the same designers that designed the cores, or can be independent third party intellectual property (IP) blocks incorporated into the processor. In general, dedicated processing tasks can be performed in these accelerators more efficiently than they can be performed on either the large cores or the small cores, whether in terms of performance or power consumption. Although shown with this particular implementation in the embodiment of FIG. 1, understand the scope of the present invention is not limited in this regard. For example, instead of having only two types of cores, namely a large core and a small core, other embodiments may have multiple hierarchies of cores, including at least a large core, a medium core and a small core, with the medium core having a larger chip real estate than the small core but a smaller chip real estate than the large core and corresponding power consumption between that of the large core and the small core. In still other embodiments, the small core can be embedded within a larger core, e.g., as a subset of the logic and structures of the larger core.
Furthermore, while shown in the embodiment of FIG. 1 as including multiple large cores and multiple small cores, it is possible that for certain implementations such as a mobile processor or SoC, only a single large core and a single small core may be provided. Specifically referring now to FIG. 2, shown is a block diagram of a processor in accordance with another embodiment of the present invention in which processor 100' includes a single large core 110 and a single small core 120, along with interconnect 130 and accelerators 140a-c. As mentioned, this implementation may be suitable for mobile applications. As example power figures, for a typical large core power consumption may be on the order of approximately 6000 milliwatts (mW), while for a medium core power consumption may be on the order of approximately 500 mW, and for a very small core power consumption may be on the order of approximately 15 mW. In an implementation that avoids waking the large core, significant power benefits may be achieved.
Embodiments allow the larger, less power-efficient cores to remain in low power sleep states longer than they otherwise would be able to. By steering interrupts and other core waking events to the smaller cores instead of the larger cores, the smaller cores may run longer and wake more often, but this is still more power efficient than waking a large core to perform a trivial task such as data moving. Note that as described below for some operations, the large core may be powered on for execution, as for instance smaller cores might not support vector operations (e.g., AVX operations), complex addressing modes or floating point (FP) operations. In such cases a wake signal could be re-routed from the small core to the large core.
For example, while performing hardware-accelerated 1080p video playback on a processor, over 1000 transitions into and out of core C6 state and nearly 1200 interrupts occur each second. If even a portion of these wake events are re-steered to a smaller core using an embodiment of the present invention, significant power savings can be achieved.
FIG. 3 summarizes resume flow options between cores in accordance with one embodiment of the present invention. As seen in FIG. 3, a software domain 210 and a hardware domain 220 are present. In general, software domain 210 corresponds to OS operations with regard to power management, e.g., according to an ACPI implementation. In general, the OS, based on its knowledge of upcoming tasks according to its scheduling mechanism, can select one of multiple C-states to request the processor to enter into a low power mode. For example, an OS can issue an MWAIT call which includes a particular low-power state that is being requested.
In general, C0 corresponds to a normal operating state in which instructions are executed, while states C1-C3 are OS lower power states, each having a different level of power savings and a corresponding different level of latency to return to the C0 state. As seen, depending on an expected workload of the processor, the OS may select a non-idle state, e.g., OS C0, or one of multiple idle states, e.g., OS C-states C1-C3. Each of these idle states can be mapped to a corresponding hardware low power state that is under control of processor hardware. Thus processor hardware can map a given OS C-state to a corresponding hardware C-state, which may provide for greater power savings than that dictated by the OS. In general, lighter C-states (e.g., C1) save less power but have lower resume times than deeper C-states (e.g., C3). In various embodiments, hardware domain 220 and the mapping of OS C-states to processor C-states can be performed by a power control unit (PCU) of the processor, although the scope of the present invention is not limited in this regard. This mapping may be based on a prior history of OS-based power management requests. Also, the decision can be based on a status of the overall system, configuration information and so forth.
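To make this mapping concrete, the following sketch shows one possible software formulation of such a PCU policy. It is illustrative only: the mapping table, the use of a recent-idle-time history value, and the threshold numbers are assumptions introduced for the example, not details taken from this disclosure.

```c
/* Illustrative only: a possible PCU mapping from an OS-requested C-state to a
 * hardware C-state. The mapping, the history heuristic and the thresholds are
 * assumptions for this sketch. */
#include <stdint.h>

enum os_cstate { OS_C0, OS_C1, OS_C2, OS_C3 };
enum hw_cstate { HW_C0, HW_C1, HW_C3, HW_C6 };

enum hw_cstate map_os_to_hw_cstate(enum os_cstate req, uint32_t avg_idle_us)
{
    switch (req) {
    case OS_C0:
        return HW_C0;
    case OS_C1:
        return HW_C1;
    case OS_C2:
        /* Promote to a deeper state than dictated when recent history shows long idle periods. */
        return (avg_idle_us > 5000) ? HW_C6 : HW_C3;
    case OS_C3:
        /* Auto-demote when the core has been waking quickly: C6 entry/exit would
         * cost more than it saves for very short idle periods. */
        return (avg_idle_us < 200) ? HW_C3 : HW_C6;
    }
    return HW_C0;
}
```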
In addition, the PCU or other processor logic may be configured to direct all wake events to a smallest available core (which may be an OS invisible core, in various embodiments). As seen in FIG. 3, upon exit from a given hardware-based idle state, control resumes directly to the smallest available core such that the state is transferred to this smallest core. In contrast, in a conventional hardware/software resumption, control returns only to the large core. Generally an OS selects a C-state based on the expected idle time and resume latency requirements, which the architecture maps to a hardware C-state. Thus as seen in the embodiment of FIG. 3 all resume signals (such as interrupts) are routed to the smallest available core, which determines whether it can handle the resume operation, or instead is to send a wake signal to a larger core to continue.
Note that embodiments do not interfere with existing P-states or C-state auto-demotion where the hardware selects a hardware C-state with lower resume latency automatically based on measured experimental efficiency. Note that it is also possible that the PCU or another programmable entity may examine incoming wake events to determine which core (large or small) to route them to.
As described above, in some implementations, the small core itself can be hidden from the OS and application software. For example, a small-large core pair can be abstracted and hidden from application software. In a low power state all cores can be asleep while an accelerator (such as a video decode accelerator) performs a given task such as a decoding task. When the accelerator runs out of data, it directs a wake signal to request additional data; this wake signal can be handled by the small core, which wakes and determines that this simple data move operation can be accomplished without waking the large core, thus saving power. If a timer interrupt arrives and the small core wakes up and instead detects that a complex vector operation (like a 256-bit AVX instruction) exists in the instruction stream, the large core may be awakened to handle the complex instruction (and other instructions in this stream) to enable reduced latency. In an alternate implementation a global hardware observation mechanism, which can be located in the PCU or another uncore location near the PCU, or as a separate section of hardware logic on the global interconnect, or as an addition to the internal control logic of the small core, can detect that the small core encounters the AVX instruction and may generate an undefined instruction fault, which may cause a shut down of the small core and re-steer the instruction stream to the larger core after waking it. Note that this behavior may extend beyond instructions to configuration or features. If the small core encounters a write to a configuration space that only exists on the large core, for instance, it may request a wake of the large core.
Referring now to FIG. 4, shown is a flow diagram of a method in accordance with an embodiment of the present invention. Note that the method of FIG. 4 may be performed by various agents, depending upon a given implementation. For example, in some embodiments method 300 may be implemented in part by system agent circuitry within a processor such as a power control unit, which can be in a system agent or uncore portion of a processor. In other embodiments, method 300 may be implemented in part by interconnect logic such as power control logic within an interconnect structure that can receive interrupts, e.g., from accelerators coupled to the interconnect structure and forward the interrupts to a selected location.
As seen in FIG. 4, method 300 may begin by placing both large and small cores in a sleep state (block 310). That is, it is assumed that no active operations are being performed in the cores. As such, they can be placed in a selected low power state to reduce power consumption.
Although the cores may not be active, other agents within a processor or SoC such as one or more accelerators may be performing tasks. At block 320, an interrupt may be received from such an accelerator. This interrupt may be sent when the accelerator has completed a task, encountered an error, or when the accelerator needs additional data or other processing is to be performed by another component such as a given core. Control passes next to block 330 where the logic can send a resume signal directly to the small core. That is, the logic may be programmed to always send a resume signal to the small core (or a selected one of multiple such small cores, depending upon system implementation) when both large and small cores are in a low power state. By sending interrupts directly and always to the small core, greater power consumption by the large core can be avoided for the many instances of interrupts for which the small core can handle the requested operation. Note that certain types of filtering or caching mechanisms may be added to block 330 such that certain interrupt sources are always routed to one core or another, as desired to balance performance and power.
Referring still to FIG. 4, control next passes to diamond 340 where it can be determined whether the small core can handle a request associated with the interrupt. Although the scope of the present invention is not limited in this regard, in some embodiments this determination may be done in the small core itself, after it is awoken. Or the logic that performs the method of FIG. 4 can perform the determination (and in which case it is possible for this analysis to be done prior to sending the resume signal to the small core).
As an example, the small core may determine whether it can handle the requested operation based on performance requirements and/or instruction set architecture (ISA) capabilities of the small core. If the small core cannot handle a requested operation because it does not have ISA support, front end logic of the small core can parse a received instruction stream and determine that at least one instruction in the stream is not supported by the small core. Accordingly, the small core may issue an undefined instruction fault. This undefined fault may be sent to the PCU (or another entity), which can analyze the fault and the state of the small core to determine whether the undefined fault is as a result of the small core not having hardware support for handling the instruction, or if instead it is a true undefined fault. In the latter case, the undefined fault may be forwarded to an OS for further handling. If the fault is due to the small core not having the appropriate hardware support for handling the instruction, the PCU can cause the execution state transferred to this small core to be transferred to a corresponding large core to handle the requested instruction(s).
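By way of illustration only, the sketch below shows one way such a fault-classification decision could be expressed. The opcode-class encoding and the per-core support predicates are assumptions made for the example; they are not part of this disclosure.

```c
/* Sketch of PCU-side handling of an undefined-instruction fault raised by the
 * small core: decide whether the opcode is merely unsupported on the small
 * core (re-steer to the large core) or genuinely undefined (raise to the OS).
 * The opcode classes are illustrative assumptions. */
#include <stdbool.h>

enum fault_action { RESTEER_TO_LARGE_CORE, FORWARD_TO_OS };

static bool small_core_supports(unsigned opcode_class)
{
    /* Assume classes 0..15 are the base ISA implemented by both cores. */
    return opcode_class < 16;
}

static bool large_core_supports(unsigned opcode_class)
{
    /* Assume classes 16..31 are large-core-only (e.g., vector, FP). */
    return opcode_class < 32;
}

enum fault_action classify_undefined_fault(unsigned opcode_class)
{
    if (!small_core_supports(opcode_class) && large_core_supports(opcode_class))
        return RESTEER_TO_LARGE_CORE;   /* valid instruction, wrong core */
    return FORWARD_TO_OS;               /* truly undefined: let the OS handle it */
}
```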
In other embodiments, a transfer of the execution state between small core and large core may occur when it is determined that the small core has been executing for too long a time or with too low a performance level. That is, assume that the small core has been executing for many thousands or millions of processor cycles to perform requested tasks. Because of the more expedient execution available in the large core, it is possible that greater power reductions can occur by transferring the state to the large core to enable the large core to more rapidly conclude the task.
Still referring to FIG. 4, if it is determined that the requested operation can be handled in the small core, control passes to block 350 where the operation is thus performed in the small core. For example, assuming that the requested operation is a data move operation, the small core can perform the requested processing and, if no other tasks are pending for the small core, it can again be placed into a low power state.
If instead it is determined at diamond 340 that the small core cannot handle the requested operation, e.g., if the operation is a relatively complex operation that the small core is not configured to handle, control instead passes to block 360. There, a wakeup signal can be sent, e.g., directly from the small core to the large core, to cause the large core to be powered up. Accordingly, control passes to block 370 where the requested operation can thus be performed in the large core. Note that although described with this particular set of operations in the embodiment of FIG. 4, understand the scope of the present invention is not limited in this regard.
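For illustration, the flow of FIG. 4 can be summarized in code form as in the following sketch. The capability test (based on vector and floating point needs), the function names and the stubbed core-control calls are assumptions for the example rather than an actual implementation.

```c
/* Hypothetical sketch of the FIG. 4 flow: an interrupt arriving while all
 * cores sleep is always resumed on the small core, which either handles the
 * request or escalates to the large core. */
#include <stdbool.h>
#include <stdio.h>

struct interrupt_req { const char *source; bool needs_vector_ops; bool needs_fp; };

/* Stubs standing in for real core power/run controls. */
static void resume_small_core(void) { puts("resume small core"); }
static void wake_large_core(void)   { puts("wake large core"); }
static void run_on_small_core(const struct interrupt_req *r) { printf("small core handles %s\n", r->source); }
static void run_on_large_core(const struct interrupt_req *r) { printf("large core handles %s\n", r->source); }

static bool small_core_can_handle(const struct interrupt_req *r)
{
    /* Assume the small core lacks vector (e.g., AVX) and FP support. */
    return !r->needs_vector_ops && !r->needs_fp;
}

static void on_interrupt_all_cores_asleep(const struct interrupt_req *r)
{
    resume_small_core();                  /* block 330: always resume the small core first */
    if (small_core_can_handle(r)) {
        run_on_small_core(r);             /* block 350 */
    } else {
        wake_large_core();                /* block 360: escalate */
        run_on_large_core(r);             /* block 370 */
    }
}

int main(void)
{
    struct interrupt_req data_move = { "accelerator data move", false, false };
    struct interrupt_req avx_task  = { "AVX-heavy task", true, false };
    on_interrupt_all_cores_asleep(&data_move);
    on_interrupt_all_cores_asleep(&avx_task);
    return 0;
}
```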
Thus in various embodiments, a mechanism may be provided to allow hardware interrupts and other wake signals to be routed directly to the small core, without waking the large core. Note that in different implementations, the small core itself or a supervisory agent can determine whether the wake signal and processing can be completed without waking the large core. In representative cases, the smaller core may be much more power efficient than the larger cores, and may as a result support only a subset of the instructions that the large core supports. And, many operations to be performed upon waking from a low power state can be offloaded to a simpler, more power-efficient core to avoid waking a larger more powerful core in heterogeneous environments (where many cores of various sizes are included in a system for performance or power efficiency reasons).
Referring now to FIG. 5, shown is a flow diagram of a method for transferring execution state in accordance with an embodiment of the present invention. As shown in FIG. 5, method 380 may be performed by logic of a PCU, in one embodiment. This logic may be triggered responsive to a request to place a large core into a low power state. Responsive to such request, method 380 may begin at block 382 where the execution state of the large core can be stored in a temporary storage area. Note that this temporary storage area may be a dedicated state save area associated with the core or it can be within a shared cache such as a last level cache (LLC). Although the scope of the present invention is not limited in this regard, the execution state can include general-purpose registers, status and configuration registers, execution flags and so forth.
In addition, at this time additional operations to enable the large core to be placed into a low power state can be performed. Such operations include flushing of the internal caches and other state as well as signaling for shutdown of the given core.
Still referring to FIG. 5, it can then be determined whether the small core has resumed (diamond 384). This resumption may occur as a result of a resume signal received responsive to an interrupt coming from, e.g., an accelerator of the processor. As part of the small core resumption, control passes to block 386 where at least a portion of the large core state can be extracted from the temporary storage area. More specifically, this extracted portion may be that portion of the large core's execution state that is to be used by the small core. As examples, this state portion may include the main register contents, various flags such as certain execution flags, machine status registers and so forth. However, certain state may not be extracted, such as state associated with one or more execution units present in the large core that do not have corresponding execution units in the small core. This extracted portion of the state can then be sent to the small core (block 388), thus enabling the small core to perform whatever operations are appropriate responsive to the given interrupt. Although shown with this particular implementation in the embodiment of FIG. 5, understand the scope of the present invention is not limited in this regard.
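As an illustrative sketch of the FIG. 5 state transfer, the code below saves a large core's execution state to a temporary area and later extracts only the subset usable by the small core. The register layout and the choice of subset are assumptions for the example.

```c
/* Illustrative layout only: which registers exist and which subset the small
 * core receives are assumptions for this sketch. */
#include <stdint.h>
#include <string.h>

struct exec_state {
    uint64_t gpr[16];          /* general-purpose registers */
    uint64_t flags;            /* execution flags */
    uint64_t msr_status;       /* machine status */
    uint64_t vector_regs[32];  /* large-core-only state with no small-core analog */
};

struct small_core_state {
    uint64_t gpr[16];
    uint64_t flags;
    uint64_t msr_status;
};

static struct exec_state temp_save_area;   /* e.g., in the LLC or a dedicated save area */

void save_large_core_state(const struct exec_state *large)
{
    temp_save_area = *large;               /* block 382 */
}

void resume_small_core_with_subset(struct small_core_state *small)
{
    /* blocks 386/388: extract only the portion the small core can use */
    memcpy(small->gpr, temp_save_area.gpr, sizeof small->gpr);
    small->flags = temp_save_area.flags;
    small->msr_status = temp_save_area.msr_status;
    /* vector_regs stay in the temporary area for a possible later merge */
}
```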
Referring now to FIG. 6, shown is a block diagram of a processor in accordance with an embodiment of the present invention. As shown in FIG. 6, processor 400 may be a multicore processor including a first plurality of cores 4101-410n that can be exposed to an OS, and a second plurality of cores 410a-x that are transparent to the OS.
As seen, the various cores may be coupled via an interconnect 415 to a system agent or uncore 420 that includes various components. As seen, the uncore 420 may include a shared cache 430 which may be a last level cache. In addition, the uncore may include an integrated memory controller 440, various interfaces 450a-n, power control unit 455, and an advanced programmable interrupt controller (APIC) 465.
PCU 455 may include various logic to enable power efficient operation in accordance with an embodiment of the present invention. As seen, PCU 455 can include wakeup logic 452 that can perform wakeups as described above. Thus logic 452 can be configured to always wake a small core first. However, this logic can be configured dynamically to not perform such small core direct wakeups in certain circumstances. For example, a system can be dynamically configured for power saving operations, e.g., when the system is a mobile system running on a battery. In such circumstances, the logic can be configured to always wake the small core. Instead, if the system is a server system, desktop or laptop system that is connected to wall power, embodiments may provide for a user-based selection to select latency and performance over power savings. Thus wakeup logic 452 can be configured in such instances to wake up a large core rather than a small core responsive to an interrupt. Similar wakeups of the large core can be performed when it has been determined that a large number of small core wakeups result in a redirection to a large core.
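A minimal sketch of such a configurable wakeup policy is shown below, assuming a simple battery/wall-power input and a user performance preference; both inputs and the policy names are illustrative assumptions, not part of this disclosure.

```c
/* Illustrative policy selection for the wakeup logic: on battery, always
 * resume the small core first; on wall power the user or BIOS may prefer
 * latency and route wakes straight to the large core. */
#include <stdbool.h>

enum wake_policy { WAKE_SMALL_FIRST, WAKE_LARGE_DIRECT };

enum wake_policy select_wake_policy(bool on_battery, bool user_prefers_performance)
{
    if (on_battery)
        return WAKE_SMALL_FIRST;
    return user_prefers_performance ? WAKE_LARGE_DIRECT : WAKE_SMALL_FIRST;
}
```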
To further enable power efficient operation, PCU 455 may further include a state transfer logic 454 that can perform transfers of execution state between large and small cores. As discussed above, this logic may be used to take a large core's execution state stored into a temporary storage during a low power state, and extract at least a portion of that state to provide to a small core upon a small core wakeup.
Further still, PCU 455 may include an interrupt history storage 456. Such storage may include a plurality of entries each identifying an interrupt that has occurred during system operation and whether the interrupt was successfully handled by the small core. Then based on this history, when a given interrupt is received, a corresponding entry of this storage can be accessed to determine whether a previous interrupt of the same type was successfully handled by the small core. If so, the PCU can direct the new incoming interrupt to the same small core.
If it is instead determined based on this history that this type of interrupt was not successfully handled by the small core (or was handled with unsatisfactorily low performance), the interrupt can be sent to a large core.
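One possible realization of the interrupt history storage is sketched below; the table size, entry format and default-to-small-core policy are assumptions made for the example.

```c
/* Illustrative interrupt-history table: one entry per interrupt type,
 * recording whether the small core handled it successfully last time. */
#include <stdbool.h>

#define MAX_IRQ_TYPES 64

struct irq_history_entry { bool seen; bool small_core_ok; };
static struct irq_history_entry irq_history[MAX_IRQ_TYPES];

/* Route to the small core unless history says this type previously failed there. */
bool route_to_small_core(int irq_type)
{
    if (irq_type < 0 || irq_type >= MAX_IRQ_TYPES)
        return true;                       /* default policy: try the small core */
    struct irq_history_entry *e = &irq_history[irq_type];
    return !e->seen || e->small_core_ok;
}

void record_outcome(int irq_type, bool handled_by_small_core)
{
    if (irq_type >= 0 && irq_type < MAX_IRQ_TYPES) {
        irq_history[irq_type].seen = true;
        irq_history[irq_type].small_core_ok = handled_by_small_core;
    }
}
```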
Still referring to FIG. 6, PCU 455 may further include an undefined handling logic 458. Such logic may receive undefined faults issued by a small core. Based on this logic, information in the small core can be accessed. Then it can be determined whether the undefined fault is as a result of a lack of support for the instruction in the small core or for another reason. Responsive to this determination, the logic can either cause the small core's state to be merged with the remaining part of the large core execution state (stored in a temporary storage area) and thereafter sent to the large core for handling of the interrupt, or send the undefined fault to an OS for further handling. When it is determined that a small core cannot handle the interrupt, the portion of the execution state provided to the small core is thus taken from the small core and saved back to the temporary storage location and accordingly, the small core can be powered down. This merged state along with the remaining execution state of the large core can then be provided back to the large core to enable the large core to handle an interrupt that the small core could not handle. Note also that an entry in interrupt history storage 456 can be written responsive to such mishandling by the small core. Although shown with this particular logic in the embodiment of FIG. 6, understand the scope of the present invention is not limited in this regard. For example, the various logics of PCU 455 can be implemented in a single logic block in other embodiments.
APIC 465 may receive various interrupts, e.g., issued from accelerators, and direct the interrupts as appropriate to a given one or more cores. In some embodiments, to maintain the small cores as hidden to the OS, APIC 465 may dynamically remap incoming interrupts, each of which may include an APIC identifier associated with it, from an APIC ID associated with a large core to an APIC ID associated with a small core.
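For illustration, such remapping could take a form like the following sketch, in which interrupts addressed to a large core's APIC ID are redirected to a paired small core's APIC ID; the ID values and the pairing are assumptions for the example.

```c
/* Hypothetical remap: interrupts addressed to a large core's APIC ID are
 * redirected to the paired small core's APIC ID, keeping the small core
 * invisible to the OS. The ID values are illustrative. */
#include <stdint.h>

#define NUM_LARGE_CORES 2

static const uint8_t large_apic_id[NUM_LARGE_CORES] = { 0x00, 0x02 };
static const uint8_t small_apic_id[NUM_LARGE_CORES] = { 0x10, 0x12 };

uint8_t remap_apic_dest(uint8_t dest, int redirect_to_small)
{
    if (!redirect_to_small)
        return dest;
    for (int i = 0; i < NUM_LARGE_CORES; i++)
        if (dest == large_apic_id[i])
            return small_apic_id[i];
    return dest;   /* not a large-core ID: leave untouched */
}
```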
With further reference to FIG. 6, processor 400 may communicate with a system memory 460, e.g., via a memory bus. In addition, by interfaces 450, connection can be made to various off-chip components such as peripheral devices, mass storage and so forth. While shown with this particular implementation in the embodiment of FIG. 6, the scope of the present invention is not limited in this regard.
Note that various architectures are possible to enable different coupling or integration of the large and small cores. As examples, the degree of coupling between these disparate cores can depend on a variety of engineering optimization parameters related to die area, power, performance and responsiveness.
Referring now to FIG. 7, shown is a block diagram of a processor in accordance with another embodiment of the present invention. As shown in FIG. 7, processor 500 may be a true heterogeneous processor including a large core 510 and a small core 520. As seen, each processor may be associated with its own private cache memory hierarchy, namely cache memories 515 and 525 which may include both level 1 and level 2 cache memories. In turn, the cores may be coupled together via a ring interconnect 530. Multiple accelerators 540a and 540b and a LLC, namely an L3 cache 550 which may be a shared cache, are also coupled to the ring interconnect. In this implementation, execution state between the two cores may be transferred via ring interconnect 530. As described above, the execution state of the large core 510 can be stored in cache 550 prior to entry into a given low power state. Then upon wakeup of small core 520, at least a subset of this execution state can be provided to the small core to ready the core for execution of an operation that triggered its wakeup. Thus in the embodiment of FIG. 7, the cores are loosely coupled via this ring interconnect. Although shown for ease of illustration with a single large core and a single small core, understand the scope of the present invention is not limited in this regard. Using an implementation such as that of FIG. 7, any state or communication to be exchanged can be handled either via the ring architecture (which may also be a bus or fabric architecture), or, in other embodiments, via a dedicated bus between the two cores (not shown in FIG. 7).
Referring now to FIG. 8, shown is a block diagram of a processor in accordance with yet another embodiment of the present invention. As shown in FIG. 8, processor 500' may be a hybrid heterogeneous processor in which there is tight coupling or integration between the large and small cores. Specifically as seen in FIG. 8, large core 510 and small core 520 may share a shared cache memory 518, which in various embodiments may include both level 1 and level 2 caches. As such, execution state can be transferred from one of the cores to the other via this cache memory, thus avoiding the latency of communication via ring interconnect 530. Note that this arrangement allows for lower power due to reduced data movement overheads and faster communication between the cores, but may not be as flexible.
It should be noted that FIGS. 7 and 8 only illustrate two possible implementations (and only show limited numbers of cores). More implementation varieties are possible, including different arrangements of cores, a combination of the two schemes, more than two types of cores, etc. It is also possible that in a variant of FIG. 8 the two cores may share some components like execution units, an instruction pointer or a register file.
As discussed, embodiments can be completely transparent and invisible to the operating system, and thus no software changes are required, with only minimal increases in resume time from C-states. In other embodiments, the presence and availability of small cores can be exposed to the OS to thus enable the OS to make decisions whether to provide an interrupt to a small core or a large core. Furthermore, embodiments may provide mechanisms in system software such as a basic input output system (BIOS) to expose the large and small cores to the OS, or to configure whether the small cores are exposed or not. Embodiments may increase apparent resume times from C-states, but this is acceptable as current platforms vary in resume latencies, and currently no useful work is done during the time a core's state is being restored.
The ratio of how different small and large cores are may vary from insignificant differences to major microarchitectural structural differences. According to various embodiments, the primary differentiators between the heterogeneous cores may be the die area and power consumed by the cores.
In some implementations, a control mechanism may be provided such that if it is detected that the large core is woken most of the time upon resume, waking of the small core first may be bypassed, and the large core can be directly woken, at least for a predetermined period of time to preserve performance. Note that in some embodiments a mechanism to universally re-steer all interrupts and other wake signals to either the small or large core can be exposed to software, both system and user-level software, depending on the power and performance requirements of the application and system. As one such example, a user-level instruction may be provided to perform the steering of wakeup operations to a specified core. Such instruction may be a variant of an MWAIT-like instruction.
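A sketch of such a bypass heuristic appears below, assuming a sliding window of wake events and an escalation threshold; the window size, threshold and bypass duration are illustrative values only.

```c
/* Sketch of the bypass heuristic: over a window of wake events, if most
 * small-core wakes escalate to the large core, route wakes directly to the
 * large core for a while. All constants are assumed values. */
#include <stdbool.h>

#define WINDOW        32
#define ESCALATE_MAX  24     /* if more than 24 of the last 32 wakes escalated, bypass */
#define BYPASS_WAKES  128    /* bypass the small core for this many wakes */

static int window_count, escalations, bypass_remaining;

bool wake_small_core_first(void)
{
    if (bypass_remaining > 0) {
        bypass_remaining--;
        return false;        /* wake the large core directly */
    }
    return true;
}

void record_wake(bool escalated_to_large_core)
{
    if (escalated_to_large_core)
        escalations++;
    if (++window_count == WINDOW) {
        if (escalations > ESCALATE_MAX)
            bypass_remaining = BYPASS_WAKES;
        window_count = 0;
        escalations = 0;
    }
}
```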
In some embodiments, an accelerator can send a hint to the PCU or other management agent with an interrupt to indicate that the requested operation is a relatively simple operation such that it can be handled effectively in the small core. This accelerator-provided hint may be used by the PCU to automatically direct incoming interrupts to the small core for handling. Referring now to FIG. 9, shown is a timing diagram illustrating operations occurring in a large core 710 and a small core 720 in accordance with an embodiment of the present invention. As seen, a longer sleep duration for large core 710 can be enabled by allowing a device interrupt to be provided to small core 720 directly, and determining in the small core whether it can handle the interrupt. If so, large core 710 can remain in a sleep state and the interrupt handled on small core 720.
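As noted above, an accelerator may tag its interrupt with a hint. For illustration, the hint could be carried as a field of the interrupt descriptor, as in the sketch below; the field names and the fallback routing are assumptions for the example rather than a defined interface.

```c
/* Hypothetical interrupt descriptor: the accelerator sets hint_simple for
 * trivial work (e.g., a data move) so the PCU can steer the interrupt to the
 * small core without further analysis. */
#include <stdbool.h>

enum wake_target { SMALL_CORE, LARGE_CORE };

struct accel_interrupt {
    int  source_id;
    bool hint_simple;
};

/* Fallback routing when no hint is present; a real PCU might consult the
 * interrupt history table described earlier. */
static enum wake_target default_route(int source_id)
{
    (void)source_id;
    return SMALL_CORE;
}

enum wake_target select_wake_target(const struct accel_interrupt *irq)
{
    return irq->hint_simple ? SMALL_CORE : default_route(irq->source_id);
}
```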
Referring now to FIG. 10, shown is a graphical illustration of power savings in accordance with an embodiment of the present invention. As shown in FIG. 10, in a conventional system that has transitions from an active C0 state to a deep low power state, e.g., a C6 state, core power consumption of a large core can vary from a relatively high level, e.g., 500 mW during every entry into the C0 state, to a zero power consumption level in the C6 state (middle view). Instead in an embodiment of the present invention (bottom view), wakeups into a C0 state can be directed away from the large core and to a small core and thus, rather than the 500 mW power consumption level, the small cores can handle C0 states at a much lower power level, e.g., 10 mW in the embodiment of FIG. 10.
Embodiments may be implemented in many different system types. Referring now to FIG. 11, shown is a block diagram of a system in accordance with an embodiment of the present invention. As shown in FIG. 11, multiprocessor system 600 is a point-to-point interconnect system, and includes a first processor 670 and a second processor 680 coupled via a point-to-point interconnect 650. As shown in FIG. 11, each of processors 670 and 680 may be multicore processors, including first and second processor cores (i.e., processor cores 674a and 674b and processor cores 684a and 684b), although potentially many more cores may be present in the processors. More specifically, each of the processors can include a mix of large, small (and possibly medium) cores, accelerators and so forth, in addition to logic to direct wakeups to the smallest available core, when at least the large cores are in a low power state, as described herein.
Still referring to FIG. 11, first processor 670 further includes a memory controller hub (MCH) 672 and point-to-point (P-P) interfaces 676 and 678. Similarly, second processor 680 includes a MCH 682 and P-P interfaces 686 and 688. As shown in FIG. 11, MCH's 672 and 682 couple the processors to respective memories, namely a memory 632 and a memory 634, which may be portions of system memory (e.g., DRAM) locally attached to the respective processors. First processor 670 and second processor 680 may be coupled to a chipset 690 via P-P interconnects 652 and 654, respectively. As shown in FIG. 11, chipset 690 includes P-P interfaces 694 and 698.
Furthermore, chipset 690 includes an interface 692 to couple chipset 690 with a high performance graphics engine 638, by a P-P interconnect 639. In turn, chipset 690 may be coupled to a first bus 616 via an interface 696. As shown in FIG. 11, various input/output (I/O) devices 614 may be coupled to first bus 616, along with a bus bridge 618 which couples first bus 616 to a second bus 620. Various devices may be coupled to second bus 620 including, for example, a keyboard/mouse 622, communication devices 626 and a data storage unit 628 such as a disk drive or other mass storage device which may include code 630, in one embodiment. Further, an audio I/O 624 may be coupled to second bus 620. Embodiments can be incorporated into other types of systems including mobile devices such as a smart cellular telephone, tablet computer, netbook, or so forth.
Embodiments may be implemented in code and may be stored on a non-transitory storage medium having stored thereon instructions which can be used to program a system to perform the instructions. The storage medium may include, but is not limited to, any type of disk including floppy disks, optical disks, solid state drives (SSDs), compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, or any other type of media suitable for storing electronic instructions.
While the present invention has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of this present invention.
Clause 1. An apparatus comprising: a first core to execute instructions; a second core to execute instructions, the second core heterogeneous to and smaller than the first core; and a logic to cause the second core and not the first core to be woken responsive to an interrupt when the first and second cores are in a low power state.
Clause 2. The apparatus of clause 1 wherein the logic is to always cause the second core and not the first core to be woken responsive to the interrupt.
Clause 3. The apparatus of clause 1 wherein the logic is to provide a subset of an execution state of the first core to the second core responsive to the interrupt.
Clause 4. The apparatus of clause 3 wherein the second core is to determine whether the second core can handle the interrupt, and if not, to cause a wakeup signal to be sent to the first core.
Clause 5. The apparatus of clause 4 wherein responsive to the determination that the second core cannot handle the interrupt, the logic is to obtain the subset of the execution state of the first core from the second core and to merge the execution state subset with a remainder of the execution state of the first core stored in a temporary storage area.
Clause 6. The apparatus of clause 1 wherein the apparatus comprises a multicore processor including the first and second cores and a power control unit (PCU), the PCU including the logic, the logic comprising: a wakeup logic; a state transfer logic; an undefined handling logic; and an interrupt history storage.
Clause 7. The apparatus of clause 1 further comprising an accelerator coupled to the logic, the accelerator to perform a task and to send the interrupt to the logic upon completion of the task.
Clause 8. The apparatus of clause 7 wherein the second core is to handle the interrupt when the interrupt comprises a request for a data movement operation.
Clause 9. The apparatus of clause 7 wherein the second core is to cause a wakeup signal to be sent to the first core to enable the first core to handle the interrupt when the interrupt comprises a request for a vector operation.
Clause 10. The apparatus of clause 9 wherein the logic is to receive an undefined instruction fault from the second core, determine that the second core cannot handle the vector operation, obtain an execution state from the second core, merge the execution state with at least a portion of an execution state of the first core stored in a temporary storage area, and cause the merged execution state to be sent to the first core.
Clause 11. The apparatus of clause 1 wherein the logic is to analyze a plurality of interrupts and if a majority of the plurality of interrupts are to be handled by the first core, the logic is to not wake the second core responsive to the interrupt and instead wake the first core.
Clause 12. A method comprising: receiving an interrupt from an accelerator in a logic of a processor including a first small core, a first large core and the accelerator, when the first small core and the first large core are in a low power state; sending a resume signal directly to the first small core responsive to the interrupt and providing a subset of an execution state of the first large core to the first small core; and determining whether the first small core can handle a request associated with the interrupt, and if so performing an operation corresponding to the request in the first small core.
Clause 13. The method of clause 12 further comprising if the first small core cannot handle the request, obtaining the execution state subset from the first small core, merging the execution state subset with a stored execution state of the first large core, and sending a wakeup signal and the merged execution state to the large core.
Clause 14. The method of clause 13 further comprising thereafter performing the operation corresponding to the request in the first large core.
Clause 15. The method of clause 12 further comprising receiving the interrupt with a hint to indicate whether the interrupt should be directed to the first small core or the first large core.
Clause 16. The method of clause 12 further comprising accessing an entry of a table based on a type of the interrupt and determining whether to send the resume signal directly to the first small core or the first large core based on the entry.
Clause 17. A system comprising: a multicore processor including a first plurality of cores and a second plurality of cores, the second plurality of cores having lower thermal design power than the first plurality of cores, an accelerator, and a power control unit (PCU), wherein the PCU is to receive an interrupt from the accelerator when the first plurality of cores and the second plurality of cores are in a low power state, send a resume signal directly to a first of the second plurality of cores responsive to the interrupt and provide a subset of an execution state of a first of the first plurality of cores to the first of the second plurality of cores; and a dynamic random access memory (DRAM) coupled to the multicore processor.
Clause 18. The system of clause 17 wherein the first plurality of cores are of a heterogeneous design from the second plurality of cores.
Clause 19. The system of clause 17 wherein the second plurality of cores are transparent to an operating system (OS).
Clause 20. The system of clause 17 wherein the PCU is to access an entry of a table using the interrupt to determine whether to send the resume signal to the first one of the first or second plurality of cores, wherein the PCU is to send the resume signal to the first one of the first plurality of cores when the entry indicates that one of the second plurality of cores incurred an undefined fault responsive to a previous interrupt of the same type as the interrupt.

Claims (20)

  1. A mobile device comprising: an apparatus; a dynamic random access memory (DRAM) coupled to the apparatus; a data storage, wherein the apparatus comprises: a cryptographic accelerator; a video accelerator; a memory controller; and a processor comprising: a first plurality of cores; a second plurality of cores, wherein the second plurality of cores is heterogeneous to and has a lower power consumption than the first plurality of cores; an interconnect to couple the first plurality of cores and the second plurality of cores and a shared cache memory coupled to at least the first plurality of cores; and a logic to cause a core of the second plurality of cores to execute an operation, wherein based at least in part on a performance level of the core of the second plurality of cores, the logic is to cause an execution state of the core of the second plurality of cores to be transferred to a core of the first plurality of cores to enable the core of the first plurality of cores to execute the operation.
  2. The mobile device of claim 1, wherein the logic is to cause the core of the second plurality of cores and not the core of the first plurality of cores to be woken in response to an interrupt when the core of the first plurality of cores and the core of the second plurality of cores are in a low power state.
  3. The mobile device of claim 2, wherein the logic is to cause the core of the first plurality of cores and not the core of the second plurality of cores to be woken in response to the interrupt when an entry of a table indicates that the core of the second plurality of cores incurred an undefined fault in response to a previous interrupt of the same type as the interrupt.
  4. The mobile device of claim 2, wherein the logic is to provide a subset of an execution state of the core of the first plurality of cores to the core of the second plurality of cores in response to the interrupt.
  5. The mobile device of claim 4, wherein in response to a determination that the core of the second plurality of cores cannot handle at least one requested operation, the logic is to obtain the subset of the execution state from the core of the second plurality of cores and to merge the execution state subset with a remainder of the execution state of the core of the first plurality of cores stored in a temporary storage area.
  6. The mobile device of claim 2, wherein the video accelerator is to perform a task and to send the interrupt to the logic upon completion of the task.
  7. The mobile device of claim 2, wherein the logic is to analyze a plurality of interrupts and if a majority of the plurality of interrupts are to be handled by the core of the first plurality of cores, the logic is to not wake the core of the second plurality of cores in response to the interrupt and instead wake the core of the first plurality of cores.
  8. The mobile device of claim 1, wherein the processor comprises a multicore processor, the logic comprising: a wakeup logic; a state transfer logic; an undefined handling logic; and an interrupt history storage.
  9. The mobile device of claim 1, further comprising an interrupt controller to receive a plurality of interrupts and direct the plurality of interrupts to one or more cores of at least one of the first plurality of cores and the second plurality of cores.
  10. The mobile device of claim 1, wherein the mobile device comprises a smartphone.
  11. The mobile device of claim 1, wherein the mobile device comprises a tablet computer.
  12. The mobile device of claim 1, further comprising an audio device.
  13. The mobile device of claim 1, wherein the core of the first plurality of cores further comprises at least one cache memory.
  14. A method comprising: causing a core of a second plurality of cores of a processor of a mobile device to execute an operation, based at least in part on a performance level of the core of the second plurality of cores, the processor comprising a first plurality of cores, the second plurality of cores heterogeneous to and having a lower power consumption than the first plurality of cores, an interconnect to couple the first plurality of cores and the second plurality of cores and a shared cache memory coupled to at least the first plurality of cores; and causing an execution state of the core of the second plurality of cores to be transferred to a core of the first plurality of cores to enable the core of the first plurality of cores to execute the operation.
  15. The method of claim 14, further comprising causing the core of the second plurality of cores and not the core of the first plurality of cores to be woken in response to an interrupt when the core of the first plurality of cores and the core of the second plurality of cores are in a low power state.
  16. The method of claim 15, further comprising causing the core of the first plurality of cores and not the core of the second plurality of cores to be woken in response to the interrupt when an entry of a table indicates that the core of the second plurality of cores incurred an undefined fault in response to a previous interrupt of the same type as the interrupt.
  17. The method of claim 15, further comprising providing a subset of an execution state of the core of the first plurality of cores to the core of the second plurality of cores in response to the interrupt.
  18. At least one computer readable storage medium comprising instructions that when executed enable a system to: cause a core of a second plurality of cores of a processor of a mobile device to execute an operation, based at least in part on a performance level of the core of the second plurality of cores, the processor comprising a first plurality of cores, the second plurality of cores heterogeneous to and having a lower power consumption than the first plurality of cores, an interconnect to couple the first plurality of cores and the second plurality of cores and a shared cache memory coupled to at least the first plurality of cores; and cause an execution state of the core of the second plurality of cores to be transferred to a core of the first plurality of cores to enable the core of the first plurality of cores to execute the operation.
  19. The at least one computer readable medium of claim 18, further comprising instructions that when executed enable the system to cause the core of the second plurality of cores and not the core of the first plurality of cores to be woken in response to an interrupt when the core of the first plurality of cores and the core of the second plurality of cores are in a low power state.
  20. The at least one computer readable medium of claim 19, further comprising instructions that when executed enable the system to cause the core of the first plurality of cores and not the core of the second plurality of cores to be woken in response to the interrupt when an entry of a table indicates that the core of the second plurality of cores incurred an undefined fault in response to a previous interrupt of the same type as the interrupt.
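As a rough illustration of the state hand-off recited in claims 1, 4 and 5 above, the C sketch below parks the remainder of a large core's execution state in a temporary storage area, hands a register subset to a small core, and merges the (possibly updated) subset back when the small core cannot handle the requested operation. The structure layouts, field sizes and function names are illustrative assumptions only; they are not the claimed implementation.

#include <stdint.h>

/* Register subset the small core is given (illustrative layout). */
struct exec_state_subset {
    uint64_t gpr[16];
    uint64_t pc;
    uint64_t flags;
};

/* Full large-core execution state: the shared subset plus extended state
 * (for example, wide vector/FP registers) that only the large core implements. */
struct exec_state {
    struct exec_state_subset common;
    uint8_t extended[512];
};

/* Temporary storage area holding the remainder while the small core runs. */
static struct exec_state temp_storage;

/* In response to an interrupt, provide only the subset to the small core and
 * park the full state for a possible later merge (compare claim 4). */
static void provide_subset_to_small_core(const struct exec_state *full,
                                         struct exec_state_subset *to_small)
{
    temp_storage = *full;
    *to_small = full->common;
}

/* If the small core cannot handle the requested operation, merge its updated
 * subset with the stored remainder so the large core can resume (compare claim 5). */
static void merge_for_large_core(const struct exec_state_subset *from_small,
                                 struct exec_state *to_large)
{
    *to_large = temp_storage;        /* restore the remainder, including extended state */
    to_large->common = *from_small;  /* adopt the small core's register updates */
}

int main(void)
{
    struct exec_state big = {0};
    struct exec_state_subset small = {0};

    provide_subset_to_small_core(&big, &small);  /* small core takes over */
    small.pc += 4;                               /* small core makes some progress */
    merge_for_large_core(&small, &big);          /* operation escalated to large core */
    return (int)(big.common.pc == 4 ? 0 : 1);
}

Moving only the subset on each hand-off keeps the transferred state small, which appears to be the motivation for staging the remainder in temporary storage rather than copying the full context between cores.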
GB1609345.2A 2011-09-06 2011-09-06 Power efficient processor architecture Active GB2536825B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1609345.2A GB2536825B (en) 2011-09-06 2011-09-06 Power efficient processor architecture

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB1402807.0A GB2507696B (en) 2011-09-06 2011-09-06 Power efficient processor architecture
GB1609345.2A GB2536825B (en) 2011-09-06 2011-09-06 Power efficient processor architecture

Publications (3)

Publication Number Publication Date
GB201609345D0 GB201609345D0 (en) 2016-07-13
GB2536825A true GB2536825A (en) 2016-09-28
GB2536825B GB2536825B (en) 2017-08-16

Family

ID=56410569

Family Applications (3)

Application Number Title Priority Date Filing Date
GB1609345.2A Active GB2536825B (en) 2011-09-06 2011-09-06 Power efficient processor architecture
GB1612629.4A Active GB2537300B (en) 2011-09-06 2011-09-06 Power efficient processor architecture
GB1609270.2A Active GB2536824B (en) 2011-09-06 2011-09-06 Power efficient processor architecture

Family Applications After (2)

Application Number Title Priority Date Filing Date
GB1612629.4A Active GB2537300B (en) 2011-09-06 2011-09-06 Power efficient processor architecture
GB1609270.2A Active GB2536824B (en) 2011-09-06 2011-09-06 Power efficient processor architecture

Country Status (1)

Country Link
GB (3) GB2536825B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114356445B (en) * 2021-12-28 2023-09-29 山东华芯半导体有限公司 Multi-core chip starting method based on large and small core architecture

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110213934A1 (en) * 2010-03-01 2011-09-01 Arm Limited Data processing apparatus and method for switching a workload between first and second processing circuitry
US20110213947A1 (en) * 2008-06-11 2011-09-01 John George Mathieson System and Method for Power Optimization

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8364857B2 (en) * 2009-08-31 2013-01-29 Qualcomm Incorporated Wireless modem with CPU and auxiliary processor that shifts control between processors when in low power state while maintaining communication link to wireless network

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110213947A1 (en) * 2008-06-11 2011-09-01 John George Mathieson System and Method for Power Optimization
US20110213934A1 (en) * 2010-03-01 2011-09-01 Arm Limited Data processing apparatus and method for switching a workload between first and second processing circuitry

Also Published As

Publication number Publication date
GB2536824B (en) 2017-06-14
GB2536825B (en) 2017-08-16
GB201609270D0 (en) 2016-07-13
GB2537300A (en) 2016-10-12
GB201612629D0 (en) 2016-09-07
GB201609345D0 (en) 2016-07-13
GB2537300B (en) 2017-10-04
GB2536824A (en) 2016-09-28

Similar Documents

Publication Publication Date Title
US10664039B2 (en) Power efficient processor architecture
KR101476568B1 (en) Providing per core voltage and frequency control
US7490254B2 (en) Increasing workload performance of one or more cores on multiple core processors
US9098274B2 (en) Methods and apparatuses to improve turbo performance for events handling
US6631474B1 (en) System to coordinate switching between first and second processors and to coordinate cache coherency between first and second processors during switching
GB2536825A (en) Power efficient processor architecture
JP6409218B2 (en) Power efficient processor architecture
JP2017021811A (en) Power efficient processor architecture
JP2016212907A (en) Excellent power efficient processor architecture