US20140215252A1 - Low Power Control for Multiple Coherent Masters - Google Patents


Info

Publication number
US20140215252A1
US20140215252A1 (application US14/026,885)
Authority
US
United States
Prior art keywords
subsystem
power
power manager
ccm
cache
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/026,885
Inventor
Mark Fullerton
Ronak Patel
Timothy Chen
Lei Yu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Avago Technologies International Sales Pte Ltd
Original Assignee
Broadcom Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Broadcom Corp filed Critical Broadcom Corp
Priority to US14/026,885 priority Critical patent/US20140215252A1/en
Assigned to BROADCOM CORPORATION reassignment BROADCOM CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: PATEL, RONAK, YU, LEI, CHEN, TIMOTHY, FULLERTON, MARK
Publication of US20140215252A1 publication Critical patent/US20140215252A1/en
Assigned to BANK OF AMERICA, N.A., AS COLLATERAL AGENT reassignment BANK OF AMERICA, N.A., AS COLLATERAL AGENT PATENT SECURITY AGREEMENT Assignors: BROADCOM CORPORATION
Assigned to AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD. reassignment AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BROADCOM CORPORATION
Assigned to BROADCOM CORPORATION reassignment BROADCOM CORPORATION TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS Assignors: BANK OF AMERICA, N.A., AS COLLATERAL AGENT
Abandoned legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/26Power supply means, e.g. regulation thereof
    • G06F1/32Means for saving power
    • G06F1/3203Power management, i.e. event-based initiation of a power-saving mode
    • G06F1/3234Power saving characterised by the action undertaken
    • G06F1/3287Power saving characterised by the action undertaken by switching off individual functional units in the computer system
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03KPULSE TECHNIQUE
    • H03K19/00Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits
    • H03K19/02Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits using specified components
    • H03K19/08Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits using specified components using semiconductor devices
    • H03K19/094Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits using specified components using semiconductor devices using field-effect transistors
    • H03K19/096Synchronous circuits, i.e. using clock signals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/04Generating or distributing clock signals or signals derived directly therefrom
    • G06F1/10Distribution of clock signals, e.g. skew
    • HELECTRICITY
    • H01ELECTRIC ELEMENTS
    • H01FMAGNETS; INDUCTANCES; TRANSFORMERS; SELECTION OF MATERIALS FOR THEIR MAGNETIC PROPERTIES
    • H01F38/00Adaptations of transformers or inductances for specific applications or functions
    • H01F38/14Inductive couplings
    • HELECTRICITY
    • H02GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
    • H02JCIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
    • H02J50/00Circuit arrangements or systems for wireless supply or distribution of electric power
    • H02J50/05Circuit arrangements or systems for wireless supply or distribution of electric power using capacitive coupling
    • HELECTRICITY
    • H02GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
    • H02JCIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
    • H02J50/00Circuit arrangements or systems for wireless supply or distribution of electric power
    • H02J50/10Circuit arrangements or systems for wireless supply or distribution of electric power using inductive coupling

Definitions

  • This invention relates to power efficiency and more specifically to power management of a system having a cache memory.
  • PMU: power management unit
  • the PMU can receive information regarding which system components will need power to perform tasks and which system components can be powered down without negatively impacting system performance. Based on this information, the PMU can efficiently allocate power to the system so that the system can perform necessary tasks while efficiently using available power.
  • Power management schemes used by some conventional PMUs have significant disadvantages. For example, some conventional PMUs power down and power on entire subsystems as power needs of the overall system change. Powering up an entire subsystem can be inefficient, for example, when only a single subsystem component needs power to perform a task.
  • Embodiments of the present disclosure provide systems and methods for more efficiently managing power among components of a system.
  • FIG. 1A is a block diagram of a system for managing power in accordance with an embodiment of the present disclosure.
  • FIG. 1B is a block diagram of a system for managing power including a cache coherency module (CCM) in accordance with an embodiment of the present disclosure.
  • FIG. 1C is a block diagram illustrating a more detailed diagram of a system for managing power in accordance with an embodiment of the present disclosure.
  • FIG. 2 is a flowchart of a method for powering up components of a subsystem in accordance with an embodiment of the present disclosure.
  • FIG. 3 is a flowchart of a method for powering down components of a subsystem in accordance with an embodiment of the present disclosure.
  • FIG. 4 is a flowchart of a method for processing a request to power down a component of a subsystem in accordance with an embodiment of the present disclosure.
  • FIG. 5 is a flowchart of a method for processing a request to access stored data in accordance with an embodiment of the present disclosure.
  • FIG. 6 is a block diagram illustrating an example computer system that can be used to implement embodiments of the present disclosure.
  • references in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
  • module shall be understood to include software, firmware, or hardware (such as circuits, microchips, processors, or devices), or any combination thereof.
  • each module can include one, or more than one, component within an actual device, and each component that forms a part of the described module can function either cooperatively or independently of any other component forming a part of the module.
  • multiple modules described herein can represent a single component within an actual device. Further, components within a module can be in a single device or distributed among multiple devices in a wired or wireless manner.
  • Embodiments of the present disclosure provide systems and methods to efficiently manage power among system components.
  • a power manager receives information from subsystems and determines which subsystem components will require power to perform upcoming tasks. Based on this received information, the power manager can power on and power down individual subsystem components. By powering up individual subsystem components instead of powering up an entire subsystem, the power manager can conserve power while still supplying enough power so that the upcoming tasks can be performed.
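The per-component decision logic described above can be sketched as follows. This is an illustrative model only, written under stated assumptions; the names (`PowerManager`, `notify_task`, `task_done`) are hypothetical and not taken from the disclosure.

```python
# Hypothetical sketch: a power manager that powers individual subsystem
# components on demand instead of powering up an entire subsystem.

class PowerManager:
    def __init__(self):
        self.powered = set()   # components currently supplied with power
        self.pending = {}      # component -> number of pending tasks

    def notify_task(self, component):
        """A subsystem reports that `component` will be needed for a task."""
        self.pending[component] = self.pending.get(component, 0) + 1
        self.powered.add(component)        # power on only this component

    def task_done(self, component):
        """A component reports it has finished; power down if nothing is pending."""
        self.pending[component] = max(0, self.pending.get(component, 0) - 1)
        if self.pending[component] == 0:
            self.powered.discard(component)

pm = PowerManager()
pm.notify_task("core_118a")
assert pm.powered == {"core_118a"}   # the rest of the subsystem stays off
pm.task_done("core_118a")
assert pm.powered == set()
```

The point of the sketch is the granularity: a notification powers one component, never its whole subsystem.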
  • Embodiments of the present invention provide systems and methods for power-efficient use of cache memory (“cache”) across multiple subsystems.
  • cache cache memory
  • systems and methods according to embodiments of the present disclosure enable a cache of a subsystem to be powered on without requiring a power-up of every component of the subsystem.
  • disclosed systems and methods enable a first subsystem to snoop into a cache of a second subsystem without requiring a full power-up of the second subsystem.
  • FIG. 1A is a block diagram of a system for managing power in accordance with an embodiment of the present disclosure.
  • FIG. 1A includes a power manager 102 coupled to two subsystems 108 .
  • Power manager 102 can be implemented using hardware, software, or a combination of hardware and software.
  • power manager 102 includes a dedicated processor (not shown) or hardware logic to process instructions for determining when to supply power to subsystems 108 .
  • power manager 102 accesses another processor (e.g., a host processor) to process instructions for determining when to supply power to subsystems 108 .
  • another processor e.g., a host processor
  • subsystems 108 communicate with power manager 102 using control signals 106 .
  • each subsystem 108 a and 108 c includes a plurality of subsystem components.
  • these subsystem components comprise caches 115 and processor cores (“cores”) 118 .
  • Caches 115 a and 115 b can be used to temporarily store data for subsystems 108 a and 108 c .
  • cores 118 are individual cores of a multi-core processor. In another embodiment, each of cores 118 is a separate processor.
  • subsystems can have differing numbers of cores.
  • subsystem 108 a includes four cores (cores 118 a , 118 b , 118 c , and 118 d ), and subsystem 108 c includes two cores (cores 118 e and 118 f ).
  • because subsystem 108 a has more cores than subsystem 108 c , subsystem 108 a is more powerful than subsystem 108 c but is also more power hungry than subsystem 108 c .
  • While only cores 118 and cache 115 are shown as components of subsystems 108 in FIG. 1A , it should be understood that subsystems can have other components in accordance with embodiments of the present disclosure.
  • Power manager 102 manages power supplied to subsystems 108 based on received information about power needs of the system of FIG. 1A .
  • power manager 102 can receive a notification whenever a system component (e.g., one of cores 118 or cache 115 ) will be needed to perform a task.
  • power manager 102 can receive information regarding pending interrupts (such as hardware wakeup events) for cores 118 . If, for example, power manager 102 determines that there is a pending interrupt for core 118 a , power manager 102 can initiate a power-up of core 118 a using control signal 106 b .
  • power manager 102 can receive an instruction from a host processor (not shown) to power on one or more of cores 118 or one or more of caches 115 .
  • subsystems 108 can send a power-up request to power manager 102 .
  • subsystem 108 a can determine that one of its system components will be needed to perform a task, and subsystem 108 a can send a request to power manager 102 (e.g., via sending control signal 106 a to power manager 102 using a powered-up core) for the system component to be powered on.
  • core 118 a of subsystem 108 a can receive an interrupt input into core 118 a . After receiving the interrupt, subsystem 108 a can use a powered-up core to send a request via control signal 106 a to power manager 102 to power on core 118 a.
  • Power manager 102 can also initiate a powering down of subsystem components to conserve power when subsystem components are not needed to perform tasks. For example, in an embodiment, if core 118 a is finished performing a task, core 118 a can send a message to power manager 102 (e.g., via control signal 106 a ) informing power manager 102 that core 118 a has finished performing a task. In an embodiment, this message can include a request for core 118 a to be shut down. It should be understood that, in an embodiment, power manager 102 can be informed that a subsystem component has finished performing a task from a source other than control signals 106 .
  • power manager 102 can then determine, based on available information, whether the subsystem component should be shut down. For example, after receiving a shutdown request from core 118 a , power manager 102 can determine whether core 118 a will be needed to perform additional tasks in the near future or whether core 118 a can be shut down to conserve power without negatively impacting system performance. For example, in an embodiment, power manager 102 can determine whether it is aware of any pending tasks that are scheduled to be processed using core 118 a . If no such tasks exist, power manager 102 can initiate a shutdown of core 118 a via control signal 106 b . Subsystem 108 a can receive control signal 106 b and can initiate the shutdown of core 118 a.
  • power manager 102 can determine that a subsystem component should be shut down even if a task is pending for the subsystem component. For example, in an embodiment, power manager 102 can determine that a core (e.g., core 118 a ) can be powered down and powered back up before the task is scheduled to be processed to conserve power. Alternatively, in an embodiment, power manager 102 can reassign the task to a different subsystem component (e.g., to another powered-up core, such as core 118 b ).
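The shutdown decision described above (power down, power-cycle, or reassign the pending task) can be sketched as a single decision function. All names, the task representation, and the wakeup-cost threshold are assumptions for this sketch, not details from the disclosure.

```python
# Hypothetical sketch of the shutdown-request decision: on a request to
# power down `core`, keep it only if a task is due too soon to power-cycle
# and no other powered-up core can take the work.

def handle_shutdown_request(core, pending_tasks, powered_cores,
                            wakeup_cost_ms=5):
    """Return the action taken for `core`: 'shutdown', 'reassign', or 'keep'."""
    tasks = [t for t in pending_tasks if t["core"] == core]
    if not tasks:
        return "shutdown"                  # nothing scheduled for this core
    soonest = min(t["due_ms"] for t in tasks)
    if soonest > 2 * wakeup_cost_ms:
        return "shutdown"                  # time to power down and back up
    others = [c for c in powered_cores if c != core]
    if others:
        for t in tasks:
            t["core"] = others[0]          # move work to a powered-up core
        return "reassign"
    return "keep"

tasks = [{"core": "118a", "due_ms": 100}]
assert handle_shutdown_request("118a", tasks, ["118a", "118b"]) == "shutdown"
tasks = [{"core": "118a", "due_ms": 4}]
assert handle_shutdown_request("118a", tasks, ["118a", "118b"]) == "reassign"
assert tasks[0]["core"] == "118b"
```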
  • caches 115 can be used to temporarily store data for subsystems 108 a and 108 c .
  • Subsystems 108 can access data stored in caches 115 faster than data stored in an external memory (not shown).
  • one subsystem can request to access data stored in a cache of another subsystem.
  • Such requests can be referred to as “cache snooping.”
  • a component of subsystem 108 c may request to snoop into cache 115 a of subsystem 108 a to access data because accessing data from cache 115 a is faster than accessing data from an external memory.
  • accessing data from caches 115 causes less latency than accessing data from an external memory.
  • core 118 e can send a request (e.g., via control signal 106 e ) to access data stored in cache 115 a .
  • Power manager 102 can then determine whether to power on cache 115 a.
  • power manager 102 can initiate a power on of cache 115 a without powering up additional components of subsystem 108 a (e.g., without powering up one of cores 118 a , 118 b , 118 c , or 118 d ) to enable subsystem 108 c to snoop into cache 115 a .
  • the system of FIG. 1A can conserve power.
  • subsystem 108 c can notify power manager 102 that it has finished accessing cache 115 a and that cache 115 a can be powered down.
  • core 118 e can send a request (e.g., via control line 106 e ) to power down cache 115 a . If power manager 102 determines that cache 115 a is not needed to perform additional tasks, power manager 102 can initiate powering down cache 115 a via control signal 106 b.
  • embodiments of the present disclosure advantageously enable caches to remain powered even when other subsystem components have been shut down. For example, if cores 118 a , 118 b , and 118 c , and 118 d have been powered down, power manager 102 can still supply cache 115 a with power, enabling sub-system 108 c to snoop into cache 115 a to access data while core 118 e or core 118 f is being used to perform a task.
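The snoop path described above, where only the remote cache is powered while its cores stay off, can be modeled minimally as follows. The class and function names are assumptions made for the sketch.

```python
# Hypothetical sketch: powering on a remote subsystem's cache for snooping
# without touching that subsystem's cores.

class Subsystem:
    def __init__(self, name, cores):
        self.name = name
        self.cache_on = False
        self.cores_on = {c: False for c in cores}

def snoop_power_up(target):
    """Power on only the target subsystem's cache so another subsystem can snoop it."""
    target.cache_on = True       # cache has its own power control
    return target.cache_on       # cores_on is deliberately untouched

sub_a = Subsystem("108a", ["118a", "118b", "118c", "118d"])
snoop_power_up(sub_a)
assert sub_a.cache_on is True
assert not any(sub_a.cores_on.values())   # cores 118a-118d stay powered down
```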
  • Systems and methods according to embodiments of the present disclosure can be configured to ensure cache coherency among subsystems. For example, if copies of the same data are stored in both caches 115 a and 115 b , systems and methods according to embodiments of the present disclosure can ensure that changes to data are uniformly made to all copies of the data stored in caches.
  • FIG. 1B is a block diagram of a system for managing power including a cache coherency module (CCM) in accordance with an embodiment of the present disclosure.
  • CCM subsystem 108 b includes CCM 114 , which ensures cache coherency among caches 115 .
  • CCM subsystem 108 b is coupled to subsystems 108 a and 108 c and also to power manager 102 .
  • CCM subsystem 108 b can communicate with power manager 102 using control signals 106 c and 106 d.
  • CCM 114 arbitrates requests to access data stored in caches 115 .
  • CCM 114 includes a dedicated processor (not shown) or hardware logic to process instructions for arbitrating requests to access data stored in caches 115 .
  • CCM 114 is notified when data is written to or read from caches 115 , and CCM 114 records (or has access to) information regarding what data is stored in caches coupled to CCM subsystem 108 b (e.g., caches 115 ).
  • subsystems 108 are not required to know which data is stored in which cache before requesting access to stored data.
  • subsystems 108 can send a request to access data to CCM 114 , and CCM 114 can determine whether the data is stored in one of caches 115 or whether it should access the data from external memory.
  • subsystems 108 can send a request to power manager 102 to power on CCM 114 , and then subsystems 108 can send a request to access data to CCM 114 .
  • a component of subsystem 108 c sends a request to CCM 114 to access data.
  • CCM 114 receives the request and determines whether the data is stored in a cache (e.g., in cache 115 a or 115 b ). If the data is not stored in a cache, CCM 114 initiates a retrieval of the data from external memory. If the data is stored in a cache, CCM 114 initiates a retrieval of the data from the cache (e.g., from cache 115 a ). If the cache storing the data is not supplied with power, CCM 114 can send a request to power manager 102 to power on the cache so that the data can be read from the cache.
  • CCM 114 is notified when data is written to a cache (e.g., to cache 115 a or 115 b ). For example, if core 118 e wants to write data to cache 115 b , core 118 e first notifies CCM 114 that it is planning to write data to cache 115 b . In an embodiment, CCM 114 notifies other subsystems accessing the data that the data is going to be updated, and CCM 114 can also update copies of the data stored in other caches. Additionally, in an embodiment, CCM 114 can be required to approve the request to write data to a cache before the data is written.
  • CCM 114 may determine that a task using the data should be allowed to finish before the data is updated.
  • CCM 114 can be configured to notify a process in progress that it is using stale data that is being updated. The process may then complete using the updated data (or the process may restart using the updated data).
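The CCM behavior described in the preceding bullets (tracking which cache holds which data, serving reads from a cache where possible, and falling back to external memory) can be sketched as a directory-style model. This is an assumed simplification; a real CCM would also handle invalidation and in-flight updates.

```python
# Hypothetical directory-based sketch of the CCM's arbitration: writes
# record which cache holds an address; reads hit that cache (requesting a
# power-up if it is off) or fall back to external memory.

class CCM:
    def __init__(self):
        self.directory = {}          # address -> name of cache holding the data
        self.powered_caches = set()

    def write(self, cache, addr, notify=None):
        """Record a write; a real CCM would also update other cached copies."""
        self.directory[addr] = cache
        self.powered_caches.add(cache)
        if notify:
            notify(addr)             # warn subsystems using addr of the update

    def read(self, addr):
        cache = self.directory.get(addr)
        if cache is None:
            return "external_memory"         # miss in every cache
        if cache not in self.powered_caches:
            self.powered_caches.add(cache)   # ask the power manager for the cache
        return cache

ccm = CCM()
ccm.write("115a", 0x1000)
assert ccm.read(0x1000) == "115a"
assert ccm.read(0x2000) == "external_memory"
```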
  • FIG. 1C is a block diagram illustrating a more detailed diagram of a system for managing power in accordance with an embodiment of the present disclosure.
  • power manager 102 can include sub-power managers for subsystems coupled to power manager 102 .
  • power manager 102 includes sub-power managers 104 a , 104 b , and 104 c for subsystems 108 a , 108 b , and 108 c , respectively.
  • sub-power managers 104 can receive control signals 106 a , 106 c , and 106 e from subsystems 108 and can send control signals 106 b , 106 d , and 106 f to subsystems 108 .
  • subsystems 108 include switching regulators 110 , phase-locked loops (PLLs) 112 , and switches 116 .
  • subsystem 108 a includes an adjustable switching regulator (ASR) 110 a coupled to PLL 112 a and switches 116 a , 116 b , 116 c , and 116 d .
  • PLL 112 a provides a clock signal for subsystem 108 a .
  • ASR 110 a supplies power to cache 115 a and to cores 118 a , 118 b , 118 c , and 118 d via switches 116 a , 116 b , 116 c , and 116 d .
  • each of switches 116 a , 116 b , 116 c , and 116 d is coupled to a respective core 118 a , 118 b , 118 c , and 118 d.
  • When sub-power manager 104 a determines that a core (e.g., core 118 a ) should be powered down, sub-power manager 104 a can send a control signal (e.g., control signal 106 b ) to the subsystem (e.g., subsystem 108 a ).
  • the control signal can instruct the subsystem and/or a switching regulator (e.g., ASR 110 a ) to toggle a switch coupled to the core (e.g., ASR 110 a can toggle switch 116 a coupled to core 118 a ) to cut off power from the core.
  • When the sub-power manager determines that an entire subsystem should be powered down, the sub-power manager can stop supplying power to the switching regulator of the subsystem. For example, sub-power manager 104 a can stop supplying power to ASR 110 a to cut off power from subsystem 108 a.
  • When sub-power manager 104 a determines that a core (e.g., core 118 a ) should be powered on, sub-power manager 104 a can send a control signal (e.g., control signal 106 b ) to the subsystem (e.g., subsystem 108 a ).
  • the control signal can instruct the subsystem and/or a switching regulator (e.g., ASR 110 a ) to toggle a switch coupled to the core (e.g., ASR 110 a can toggle switch 116 a coupled to core 118 a so that switch 116 a connects ASR 110 a to core 118 a ) to supply power to the core.
  • sub-power manager 104 a can supply power to the switching regulator of the subsystem. For example, sub-power manager 104 a can supply power to ASR 110 a to supply power to subsystem 108 a.
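The regulator-and-switch arrangement above gives two cut points: one switch per core, and the regulator feeding them all. A minimal sketch, under assumed names, of that two-level arrangement:

```python
# Hypothetical model: an adjustable switching regulator (ASR) feeds each
# core through its own switch, so a sub-power manager can cut one core or
# the whole regulator.

class ASR:
    def __init__(self, switches):
        self.enabled = True                    # regulator itself has power
        self.switches = {s: False for s in switches}

    def toggle(self, switch, on):
        self.switches[switch] = on             # connect/disconnect one core

    def core_has_power(self, switch):
        return self.enabled and self.switches[switch]

asr_110a = ASR(["116a", "116b", "116c", "116d"])
asr_110a.toggle("116a", True)                  # power on core 118a only
assert asr_110a.core_has_power("116a")
assert not asr_110a.core_has_power("116b")
asr_110a.enabled = False                       # sub-power manager cuts the ASR
assert not asr_110a.core_has_power("116a")     # the whole subsystem is off
```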
  • a cache of a subsystem is powered down when a subsystem is powered down, and a cache of a subsystem is powered on when the subsystem powers on.
  • cache 115 a is powered down when subsystem 108 a is powered down, and cache 115 a is powered on when subsystem 108 a is powered on.
  • caches can be powered down and powered on without requiring a power down or power on of the entire subsystem.
  • cache 115 a can be coupled to a dedicated switch (not shown), and ASR 110 a can toggle this dedicated switch to cut off power from cache 115 a or supply power to cache 115 a without requiring entire subsystem 108 a to be powered down or powered on.
  • CCM subsystem 108 b includes a cache switching regulator (CSR) 110 b coupled to a switch 116 e .
  • CSR 110 b toggles switch 116 e on or off to supply power to CCM 114 .
  • PLL 112 b supplies a clock signal for CCM 114 .
  • When sub-power manager 104 b determines that CCM subsystem 108 b should be powered down, sub-power manager 104 b can send a control signal (e.g., control signal 106 c ) to CCM subsystem 108 b .
  • the control signal instructs CCM subsystem 108 b and/or CSR 110 b to toggle switch 116 e coupled to CCM 114 to cut off power from CCM 114 .
  • When sub-power manager 104 b determines that CCM subsystem 108 b should be powered on, sub-power manager 104 b can send a control signal (e.g., control signal 106 c ) to CCM subsystem 108 b instructing CSR 110 b to toggle switch 116 e coupled to CCM 114 to supply power to CCM 114 .
  • subsystem components and/or subsystems can send a message to power manager 102 and/or respective sub-power managers 104 when the subsystem components and/or subsystems have finished performing tasks.
  • These messages can optionally include requests to power down the subsystem components and/or subsystems.
  • cores 118 a , 118 b , 118 c , and 118 d can send a message to sub-power manager 104 a when cores 118 a , 118 b , 118 c , and 118 d have finished performing tasks.
  • sub-power manager 104 a can initiate a powering down of cores 118 a , 118 b , 118 c , and/or 118 d by sending a control signal (e.g., control signal 106 b ) to ASR 110 a to instruct ASR 110 a to toggle switches 116 a , 116 b , 116 c , and/or 116 d to cut off power to cores 118 a , 118 b , 118 c , and/or 118 d .
  • sub-power manager 104 a can determine whether any other system components need to access any of cores 118 a , 118 b , 118 c , and/or 118 d before powering down any of cores 118 a , 118 b , 118 c , and/or 118 d.
  • subsystem 108 a can send a message to power manager 102 when subsystem 108 a has finished performing tasks. For example, if cache 115 a is no longer being used, subsystem 108 a can send a message to sub-power manager 104 a requesting that subsystem 108 a be powered down. If, after receiving this message, sub-power manager 104 a determines that subsystem 108 a should be powered down, sub-power manager 104 a can initiate a powering down of subsystem 108 a by sending a control signal (e.g., control signal 106 b ) to ASR 110 a to cut off power from ASR 110 a to power down subsystem 108 a . In an embodiment, sub-power manager 104 a can determine whether any other system components need to access subsystem 108 a before powering down subsystem 108 a.
  • subsystems can also send a message to power manager 102 informing power manager 102 that they have finished performing tasks using components of other subsystems. For example, if subsystem 108 a finished accessing cache 115 b of subsystem 108 c , subsystem 108 a can send a message to power manager 102 informing power manager 102 that it is no longer accessing cache 115 b . In an embodiment, subsystem 108 a can send this message to sub-power manager 104 a , and sub-power manager 104 a can forward the message to sub-power manager 104 c . However, it should be understood that sub-power manager 104 a or power manager 102 can process this message in accordance with embodiments of the present disclosure.
  • power manager 102 can initiate a powering down of subsystem 108 c by sending a control signal (e.g., control signal 106 f ) to ASR 110 c to cut off power from ASR 110 c to power down subsystem 108 c (and thus power down cache 115 b ).
  • power manager 102 can determine whether any other system components need to access cache 115 b and/or other components of subsystem 108 c before powering down subsystem 108 c.
  • CCM subsystem 108 b can also send a message to power manager 102 when CCM subsystem 108 b has finished performing tasks.
  • CCM subsystem 108 b can send a message to sub-power manager 104 b when CCM subsystem 108 b is no longer being used to arbitrate access to caches 115 .
  • sub-power manager 104 b can initiate a powering down of CCM subsystem 108 b by sending a control signal (e.g., control signal 106 d ) to CSR 110 b to cut off power from CSR 110 b to power down CCM subsystem 108 b (and thus power down CCM 114 ).
  • sub-power manager 104 b can determine whether any other system components need to access CCM 114 and/or other components of CCM subsystem 108 b before powering down CCM subsystem 108 b.
  • Systems and methods according to embodiments of the present disclosure enable subsystems and/or subsystem components to be powered on in layers so that unused system components are not supplied with power.
  • This layering concept provides an efficient, flexible approach to supplying power to various subsystem components. For example, in an embodiment, power manager 102 will not attempt to power down an entire subsystem while a subsystem component is still being used to perform a task. Instead, power manager 102 adopts a layered approach by first attempting to power down unused subsystem components. Then, once all subsystem components have finished performing tasks, power manager 102 determines whether to power down the subsystem. Finally, if all subsystems have finished performing tasks, power manager 102 determines whether to power down CCM subsystem 108 b (and thus power down CCM 114 ).
  • power manager 102 does not power down ASR 110 a (which, in an embodiment, supplies power to entire subsystem 108 a including cache 115 a ) until all of cores 118 a , 118 b , 118 c , and 118 d have been powered down (e.g., via switches 116 a , 116 b , 116 c , and 116 d , respectively).
  • power manager 102 does not power down CSR 110 b (which, in an embodiment, supplies power to entire subsystem 108 b including CCM 114 ) until both subsystems 108 a and 108 c have been powered down (e.g., via ASR 110 a and ASR 110 c , respectively).
  • this layering concept can also extend to powering up subsystems and subsystem components.
  • power manager 102 does not power on subsystem 108 a or subsystem 108 c until CCM subsystem 108 b has been powered on (e.g., by supplying power to CSR 110 b ).
  • power manager 102 does not power on any of cores 118 a , 118 b , 118 c , or 118 d until subsystem 108 a has been powered on (e.g., by supplying power to ASR 110 a ).
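The layering constraints above amount to a dependency order: CCM first, then a subsystem's regulator, then individual core switches, with power-down in reverse. A sketch of enforcing that order follows; the dependency table and names are assumptions for illustration.

```python
# Hypothetical enforcement of layered power sequencing: a component's
# enclosing layer is brought up first, and a layer cannot be cut while an
# inner layer is still powered.

DEPENDS_ON = {                 # component -> layer that must be on first
    "subsystem_108a": "ccm_108b",
    "subsystem_108c": "ccm_108b",
    "core_118a": "subsystem_108a",
    "core_118e": "subsystem_108c",
}

def power_on(component, powered):
    parent = DEPENDS_ON.get(component)
    if parent is not None and parent not in powered:
        power_on(parent, powered)      # bring up the enclosing layer first
    powered.add(component)

def power_off(component, powered):
    children = [c for c, p in DEPENDS_ON.items() if p == component]
    if any(c in powered for c in children):
        raise RuntimeError("inner layers still powered")
    powered.discard(component)

powered = set()
power_on("core_118a", powered)
assert powered == {"ccm_108b", "subsystem_108a", "core_118a"}
try:
    power_off("subsystem_108a", powered)   # rejected: core 118a is still on
    assert False
except RuntimeError:
    pass
```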
  • caches in accordance with embodiments of the present disclosure can be partitioned into multiple portions, and each portion of a cache can be powered down when not used to conserve power and powered up when needed.
  • power manager 102 can send a message instructing a portion of cache 115 a to be powered down when this portion of cache 115 a is not needed. While a portion of cache 115 a is powered down, other portions of cache 115 a can still be powered on and accessed.
  • power manager 102 determines that a powered down portion of cache 115 a needs to be used to perform a task, power manager 102 can send a message instructing the powered down portion of cache 115 a to be powered on again.
  • cache 115 a can be split into a first portion and a second portion. If, for example, core 118 e has finished accessing the first portion of cache 115 a , core 118 e can send a message to power manager 102 informing power manager 102 that it has finished using the first portion of cache 115 a and that the first portion of cache 115 a can be powered down. If power manager 102 determines that no other subsystems need to access the first portion of cache 115 a , power manager 102 can send a message to ASR 110 a instructing ASR 110 a to cut off power to the first portion of cache 115 a .
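The per-portion bookkeeping implied above can be sketched with a use count per portion, so a portion loses power only when no subsystem still needs it. The class and portion names are hypothetical.

```python
# Hypothetical sketch of partitioned-cache power control: each portion
# tracks its users; a portion is cut off when its last user releases it
# and powered back on when acquired again.

class PartitionedCache:
    def __init__(self, portions):
        self.users = {p: set() for p in portions}  # portion -> subsystems using it
        self.powered = set(portions)

    def acquire(self, portion, subsystem):
        self.users[portion].add(subsystem)
        self.powered.add(portion)                  # power the portion (back) on

    def release(self, portion, subsystem):
        self.users[portion].discard(subsystem)
        if not self.users[portion]:
            self.powered.discard(portion)          # no users left: cut power

cache_115a = PartitionedCache(["first", "second"])
cache_115a.acquire("first", "core_118e")
cache_115a.release("first", "core_118e")
assert "first" not in cache_115a.powered    # first portion powered down
assert "second" in cache_115a.powered       # second portion still accessible
```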
  • the components of the system of FIG. 1A , the components of the system of FIG. 1B , and/or the components of the system of FIG. 1C can be implemented on a single integrated circuit (IC). In another embodiment, some components of the systems of FIGS. 1A, 1B, and/or 1C are implemented using multiple ICs. For example, in an embodiment, power manager 102 and subsystems 108 are implemented on different ICs. Additionally, it should be understood that the components of the systems of FIGS. 1A, 1B, and/or 1C can be implemented using hardware, software, or a combination of hardware and software in accordance with embodiments of the present disclosure.
  • FIG. 2 is a flowchart of a method for powering up components of a subsystem in accordance with an embodiment of the present disclosure.
  • the CCM is powered on first.
  • sub-power manager 104 b can send a control signal (e.g., control signal 106 d ) to CCM subsystem 108 b if power manager 102 determines that CCM subsystem 108 b is powered down.
  • a subsystem is powered on. For example, once power manager 102 determines that CCM subsystem 108 b has power, power manager 102 can then power on a subsystem (e.g., subsystem 108 a or 108 c) so that the subsystem can be accessed.
  • Once a subsystem (e.g., subsystem 108 a or 108 c) is powered on, its components can be accessed. For example, if subsystem 108 a is powered on, cache 115 a can be accessed.
  • a subsystem component is powered on. For example, in an embodiment, if sub-power manager 104 a determines that subsystem 108 a has power, sub-power manager 104 a can send control signal 106 b to ASR 110 a to instruct ASR 110 a to toggle switch 116 a to supply power to core 118 a so that core 118 a can be used to perform a task.
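The FIG. 2 ordering described above (CCM subsystem first, then the subsystem, then the individual component) can be sketched as follows. All names here are illustrative assumptions for the example, not identifiers from the patent.

```python
def power_up_component(state, subsystem, component):
    """state maps names to booleans; returns the ordered power-up steps taken.
    The CCM subsystem is powered first, then the subsystem, then the component."""
    steps = []
    if not state.get("ccm"):
        state["ccm"] = True       # first: power on the CCM subsystem
        steps.append("ccm")
    if not state.get(subsystem):
        state[subsystem] = True   # next: power on the subsystem (e.g., via its ASR)
        steps.append(subsystem)
    key = (subsystem, component)
    if not state.get(key):
        state[key] = True         # last: toggle the switch for the component
        steps.append(component)
    return steps
```

Steps that are already satisfied are skipped, so powering a second core of an already-powered subsystem touches only that core's switch.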
  • FIG. 3 is a flowchart of a method for powering down components of a system in accordance with an embodiment of the present disclosure.
  • a subsystem component is powered down.
  • sub-power manager 104 a can send control signal 106 b to ASR 110 a to instruct ASR 110 a to toggle switch 116 a to power down core 118 a .
  • a subsystem is powered down.
  • sub-power manager 104 a can determine to power down subsystem 108 a when sub-power manager 104 a receives a request to power down subsystem 108 a .
  • cache 115 a is also powered down when subsystem 108 a is powered down.
  • CCM 114 is powered down.
  • sub-power manager 104 b can determine to power down CCM subsystem 108 b (and thus CCM 114 ) when sub-power manager 104 b receives a request to power down CCM subsystem 108 b.
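The FIG. 3 power-down ordering (component, then subsystem and its cache, then the CCM once no subsystem remains powered) can be sketched as below. The state layout and names are assumptions made for this example only.

```python
def power_down_component(state, subsystem, component):
    """Power down one component; if the whole subsystem is now idle,
    power it (and its cache) down too; if no subsystem remains powered,
    the CCM can also be powered down."""
    sub = state["subsystems"][subsystem]
    sub["components"][component] = False          # toggle the component's switch off
    if not any(sub["components"].values()):
        sub["powered"] = False                    # stop supplying the subsystem's ASR
        sub["cache"] = False                      # cache goes down with the subsystem
    if not any(s["powered"] for s in state["subsystems"].values()):
        state["ccm"] = False                      # finally, power down the CCM
```

As in the text, the cache is powered down along with its subsystem in this sketch; a dedicated cache switch (as mentioned later in the description) would decouple those two states.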
  • FIG. 4 is a flowchart of a method for processing a request to power down a component of a system in accordance with an embodiment of the present disclosure.
  • a request to power down a system component is received.
  • sub-power manager 104 a can receive a request to power down core 118 a .
  • a determination is made regarding whether other system components need to access the system component.
  • sub-power manager 104 a can determine whether other system components need to access core 118 a (e.g., by determining whether an instruction is pending for core 118 a ).
  • If sub-power manager 104 a determines that other system components need to access the system component, the method proceeds to step 404 , and the system component is left on.
  • sub-power manager 104 a may determine to leave core 118 a powered on if sub-power manager 104 a determines that other system components need to access core 118 a .
  • If the power manager (e.g., power manager 102 ) determines that other system components do not need to access the system component, the method proceeds to step 406 , and the system component is powered down.
  • sub-power manager 104 a may determine to power down core 118 a if sub-power manager 104 a determines that other system components do not need to access core 118 a.
  • sub-power manager 104 a determines whether other subsystem components need to access cache 115 a and/or subsystem 108 a in step 402 . If sub-power manager 104 a determines that other system components need to access cache 115 a and/or subsystem 108 a , the method proceeds to step 404 , and cache 115 a and/or subsystem 108 a is left on.
  • sub-power manager 104 a determines that other system components do not need to access cache 115 a and/or subsystem 108 a , the method proceeds to step 406 , and cache 115 a and/or subsystem 108 a are powered down (e.g., by powering down ASR 110 a ).
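The FIG. 4 decision can be sketched as a single check, as below. This is an illustrative reduction (names assumed): the sub-power manager looks for pending work targeting the component before cutting its power.

```python
def handle_power_down_request(component, pending_instructions):
    """Return 'leave_on' if any instruction is still pending for this
    component (step 404), else 'power_down' (step 406)."""
    if component in pending_instructions:  # step 402: is an instruction pending?
        return "leave_on"
    return "power_down"
```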
  • FIG. 5 is a flowchart of a method for processing a request to access stored data in accordance with an embodiment of the present disclosure.
  • a request to access data is received.
  • CCM 114 can receive a request from subsystem 108 a to access data.
  • CCM 114 can determine whether the data is stored in cache 115 a or cache 115 b . If the CCM (e.g., CCM 114 ) determines that the data is not stored in cache, the method proceeds to step 506 , and the data is accessed from external memory. For example, CCM 114 can send a request to external memory to access the data.
  • If the CCM (e.g., CCM 114 ) determines that the data is stored in a cache, the method proceeds to step 504 , and a determination is made regarding whether the cache is powered on.
  • CCM 114 can determine that the data is stored in cache 115 b and can then determine whether cache 115 b is powered on.
  • the CCM can send a request to power manager 102 to determine whether the cache is powered on.
  • CCM 114 sends a request to power manager 102 via control signal 106 c to determine whether cache 115 b is powered on.
  • power manager 102 can respond to the CCM via control signal 106 d . If the CCM (e.g., CCM 114 ) determines that the cache is powered on, the method proceeds to step 510 , and the data is accessed from the cache. For example, CCM 114 can retrieve the data from cache 115 b .
  • If the CCM (e.g., CCM 114 ) determines that the cache is not powered on, the method proceeds to step 508 , and a request to power on the cache is sent.
  • CCM 114 can send a request to power on cache 115 b to power manager 102 via control signal 106 c .
  • sub-power manager 104 c can then power on ASR 110 c to supply power to cache 115 b .
  • the method proceeds to step 510 , and the data is accessed from the cache (e.g., from cache 115 b ).
  • Embodiments of the present disclosure can be implemented in hardware, or as a combination of software and hardware. Consequently, embodiments of the disclosure may be implemented in the environment of a computer system or other processing system.
  • An example of such a computer system 600 is shown in FIG. 6 .
  • Modules depicted in FIGS. 1A-1C may execute on one or more computer systems 600 .
  • each of the steps of the processes depicted in FIGS. 2-5 can be implemented on one or more computer systems 600 .
  • Computer system 600 includes one or more processors, such as processor 604 .
  • Processor 604 can be a special purpose or a general purpose digital signal processor.
  • Processor 604 is connected to a communication infrastructure 602 (for example, a bus or network).
  • Computer system 600 also includes a main memory 606 , preferably random access memory (RAM), and may also include a secondary memory 608 .
  • Secondary memory 608 may include, for example, a hard disk drive 610 and/or a removable storage drive 612 , representing a floppy disk drive, a magnetic tape drive, an optical disk drive, or the like.
  • Removable storage drive 612 reads from and/or writes to a removable storage unit 616 in a well-known manner.
  • Removable storage unit 616 represents a floppy disk, magnetic tape, optical disk, or the like, which is read by and written to by removable storage drive 612 .
  • removable storage unit 616 includes a computer usable storage medium having stored therein computer software and/or data.
  • secondary memory 608 may include other similar means for allowing computer programs or other instructions to be loaded into computer system 600 .
  • Such means may include, for example, a removable storage unit 618 and an interface 614 .
  • Examples of such means may include a program cartridge and cartridge interface (such as that found in video game devices), a removable memory chip (such as an EPROM, or PROM) and associated socket, a thumb drive and USB port, and other removable storage units 618 and interfaces 614 which allow software and data to be transferred from removable storage unit 618 to computer system 600 .
  • Computer system 600 may also include a communications interface 620 .
  • Communications interface 620 allows software and data to be transferred between computer system 600 and external devices. Examples of communications interface 620 may include a modem, a network interface (such as an Ethernet card), a communications port, a PCMCIA slot and card, etc.
  • Software and data transferred via communications interface 620 are in the form of signals which may be electronic, electromagnetic, optical, or other signals capable of being received by communications interface 620 . These signals are provided to communications interface 620 via a communications path 622 .
  • Communications path 622 carries signals and may be implemented using wire or cable, fiber optics, a phone line, a cellular phone link, an RF link and other communications channels.
  • The terms “computer program medium” and “computer readable medium” are used to generally refer to tangible storage media such as removable storage units 616 and 618 or a hard disk installed in hard disk drive 610 . These computer program products are means for providing software to computer system 600 .
  • Computer programs are stored in main memory 606 and/or secondary memory 608 . Computer programs may also be received via communications interface 620 . Such computer programs, when executed, enable the computer system 600 to implement the present disclosure as discussed herein. In particular, the computer programs, when executed, enable processor 604 to implement the processes of the present disclosure, such as any of the methods described herein. Accordingly, such computer programs represent controllers of the computer system 600 . Where the disclosure is implemented using software, the software may be stored in a computer program product and loaded into computer system 600 using removable storage drive 612 , interface 614 , or communications interface 620 .
  • features of the disclosure are implemented primarily in hardware using, for example, hardware components such as application-specific integrated circuits (ASICs) and gate arrays.
  • signal processing functions described herein can be implemented in hardware, software, or some combination thereof.
  • signal processing functions can be implemented using computer processors, computer logic, application specific circuits (ASIC), digital signal processors, etc., as will be understood by those skilled in the art based on the discussion given herein. Accordingly, any processor that performs the signal processing functions described herein is within the scope and spirit of the present disclosure.
  • the above systems and methods may be implemented as a computer program executing on a machine, as a computer program product, or as a tangible and/or non-transitory computer-readable medium having stored instructions.
  • the functions described herein could be embodied by computer program instructions that are executed by a computer processor or any one of the hardware devices listed above.
  • the computer program instructions cause the processor to perform the signal processing functions described herein.
  • Such media include a memory device such as a RAM or ROM, or other type of computer storage medium such as a computer disk or CD ROM. Accordingly, any tangible non-transitory computer storage medium having computer program code that causes a processor to perform the signal processing functions described herein is within the scope and spirit of the present disclosure.


Abstract

Systems and methods are provided for efficiently managing power among system components. In an embodiment, a power manager receives information from subsystems and determines which subsystem components will require power to perform upcoming tasks. Based on this received information, the power manager can power on and power down individual subsystem components. Systems and methods according to embodiments of the present disclosure enable a cache of a subsystem to be powered on without requiring a power-up of every component of the subsystem. Thus, disclosed systems and methods enable a first subsystem to snoop into a cache of a second subsystem without requiring a full power-up of the second subsystem.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This application claims the benefit of U.S. Provisional Patent Application No. 61/757,947, filed on Jan. 29, 2013.
  • FIELD OF THE INVENTION
  • This invention relates to power efficiency and more specifically to power management of a system having a cache memory.
  • BACKGROUND
  • Many electronic systems use power management schemes to efficiently allocate and manage power among various system components. Some systems include a power management unit (PMU) to monitor power supplied to different system components (e.g., to memories, processors, various hardware subsystems, and/or software). The PMU can receive information regarding which system components will need power to perform tasks and which system components can be powered down without negatively impacting system performance. Based on this information, the PMU can efficiently allocate power to the system so that the system can perform necessary tasks while efficiently using available power.
  • Power management schemes used by some conventional PMUs have significant disadvantages. For example, some conventional PMUs power down and power on entire subsystems as power needs of the overall system change. Powering up an entire subsystem can be inefficient, for example, when only a single subsystem component needs power to perform a task.
  • Embodiments of the present disclosure provide systems and methods for more efficiently managing power among components of a system.
  • BRIEF DESCRIPTION OF THE DRAWINGS/FIGURES
  • The accompanying drawings, which are incorporated in and constitute part of the specification, illustrate embodiments of the disclosure and, together with the general description given above and the detailed descriptions of embodiments given below, serve to explain the principles of the present disclosure. In the drawings:
  • FIG. 1A is a block diagram of a system for managing power in accordance with an embodiment of the present disclosure.
  • FIG. 1B is a block diagram of a system for managing power including a cache coherency module (CCM) in accordance with an embodiment of the present disclosure.
  • FIG. 1C is a block diagram illustrating a more detailed diagram of a system for managing power in accordance with an embodiment of the present disclosure.
  • FIG. 2 is a flowchart of a method for powering up components of a subsystem in accordance with an embodiment of the present disclosure.
  • FIG. 3 is a flowchart of a method for powering down components of a subsystem in accordance with an embodiment of the present disclosure.
  • FIG. 4 is a flowchart of a method for processing a request to power down a component of a subsystem in accordance with an embodiment of the present disclosure.
  • FIG. 5 is a flowchart of a method for processing a request to access stored data in accordance with an embodiment of the present disclosure.
  • FIG. 6 is a block diagram illustrating an example computer system that can be used to implement embodiments of the present disclosure.
  • Features and advantages of the present disclosure will become more apparent from the detailed description set forth below when taken in conjunction with the drawings, in which like reference characters identify corresponding elements throughout. In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements. The drawing in which an element first appears is indicated by the leftmost digit(s) in the corresponding reference number.
  • DETAILED DESCRIPTION
  • In the following description, numerous specific details are set forth to provide a thorough understanding of the disclosure. However, it will be apparent to those skilled in the art that the disclosure, including structures, systems, and methods, may be practiced without these specific details. The description and representation herein are the common means used by those experienced or skilled in the art to most effectively convey the substance of their work to others skilled in the art. In other instances, well-known methods, procedures, components, and circuitry have not been described in detail to avoid unnecessarily obscuring aspects of the disclosure.
  • References in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to affect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
  • For purposes of this discussion, the term “module” shall be understood to include one of software, or firmware, or hardware (such as circuits, microchips, processors, or devices, or any combination thereof), or any combination thereof. In addition, it will be understood that each module can include one, or more than one, component within an actual device, and each component that forms a part of the described module can function either cooperatively or independently of any other component forming a part of the module. Conversely, multiple modules described herein can represent a single component within an actual device. Further, components within a module can be in a single device or distributed among multiple devices in a wired or wireless manner.
  • 1. OVERVIEW
  • Embodiments of the present disclosure provide systems and methods to efficiently manage power among system components. In an embodiment, a power manager receives information from subsystems and determines which subsystem components will require power to perform upcoming tasks. Based on this received information, the power manager can power on and power down individual subsystem components. By powering up individual subsystem components instead of powering up an entire subsystem, the power manager can conserve power while still supplying enough power so that the upcoming tasks can be performed.
  • Embodiments of the present invention provide systems and methods for power-efficient use of cache memory (“cache”) across multiple subsystems. For example, systems and methods according to embodiments of the present disclosure enable a cache of a subsystem to be powered on without requiring a power-up of every component of the subsystem. Thus, disclosed systems and methods enable a first subsystem to snoop into a cache of a second subsystem without requiring a full power-up of the second subsystem.
  • 2. POWER MANAGER
  • FIG. 1A is a block diagram of a system for managing power in accordance with an embodiment of the present disclosure. FIG. 1A includes a power manager 102 coupled to two subsystems 108. Power manager 102 can be implemented using hardware, software, or a combination of hardware and software. In an embodiment, power manager 102 includes a dedicated processor (not shown) or hardware logic to process instructions for determining when to supply power to subsystems 108. In another embodiment, power manager 102 accesses another processor (e.g., a host processor) to process instructions for determining when to supply power to subsystems 108.
  • In an embodiment, subsystems 108 communicate with power manager 102 using control signals 106. In FIG. 1A, each subsystem 108 a and 108 c includes a plurality of subsystem components. For example, in an embodiment, these subsystem components comprise caches 115 and processor cores (“cores”) 118. Caches 115 a and 115 b can be used to temporarily store data for subsystems 108 a and 108 c. In an embodiment, cores 118 are individual cores of a multi-core processor. In another embodiment, each of cores 118 is a separate processor.
  • As shown in FIG. 1A, subsystems can have differing number of cores. For example, subsystem 108 a includes four cores ( cores 118 a, 118 b, 118 c, and 118 d), and subsystem 2 includes two cores (118 e and 118 f). Because subsystem 108 a has more cores than subsystem 108 c, subsystem 108 a is more powerful than subsystem 108 c but is also more power hungry than subsystem 108 c. While only cores 118 and cache 115 are shown as components of subsystems 108 in FIG. 1A, it should be understood that subsystems can have other components in accordance with embodiments of the present disclosure.
  • 2.1. Powering Up System Components
  • Power manager 102 manages power supplied to subsystems 108 based on received information about power needs of the system of FIG. 1A. For example, in an embodiment, power manager 102 can receive a notification whenever a system component (e.g., one of cores 118 or cache 115) will be needed to perform a task. For example, in an embodiment, power manager 102 can receive information regarding pending interrupts (such as hardware wakeup events) for cores 118. If, for example, power manager 102 determines that there is a pending interrupt for core 118 a, power manager 102 can initiate a power-up of core 118 a using control signal 106 b. Alternatively, in an embodiment, power manager 102 can receive an instruction from a host processor (not shown) to power on one or more of cores 118 or one or more of caches 115.
  • In an embodiment, subsystems 108 (or individual components of subsystems 108) can send a power-up request to power manager 102. For example, in an embodiment, subsystem 108 a can determine that one of its system components will be needed to perform a task, and subsystem 108 a can send a request to power manager 102 (e.g., via sending control signal 106 a to power manager 102 using a powered-up core) for the system component to be powered on. For example, core 118 a of subsystem 108 a can receive an interrupt input into core 118 a. After receiving the interrupt, subsystem 108 a can use a powered-up core to send a request via control signal 106 a to power manager 102 to power on core 118 a.
  • 2.2. Powering Down System Components
  • Power manager 102 can also initiate a powering down of subsystem components to conserve power when subsystem components are not needed to perform tasks. For example, in an embodiment, if core 118 a is finished performing a task, core 118 a can send a message to power manager 102 (e.g., via control signal 106 a) informing power manager 102 that core 118 a has finished performing a task. In an embodiment, this message can include a request for core 118 a to be shut down. It should be understood that, in an embodiment, power manager 102 can be informed that a subsystem component has finished performing a task from a source other than control signals 106.
  • After power manager 102 determines that a subsystem component has finished performing a task, power manager 102 can then determine, based on available information, whether the subsystem component should be shut down. For example, after receiving a shutdown request from core 118 a, power manager 102 can determine whether core 118 a will be needed to perform additional tasks in the near future or whether core 118 a can be shut down to conserve power without negatively impacting system performance. For example, in an embodiment, power manager 102 can determine whether it is aware of any pending tasks that are scheduled to be processed using core 118 a. If no such tasks exist, power manager 102 can initiate a shutdown of core 118 a via control signal 106 b. Subsystem 108 a can receive control signal 106 b and can initiate the shutdown of core 118 a.
  • In an embodiment, power manager 102 can determine that a subsystem component should be shut down even if a task is pending for the subsystem component. For example, in an embodiment, power manager 102 can determine that a core (e.g., core 118 a) can be powered down and powered back up before the task is scheduled to be processed to conserve power. Alternatively, in an embodiment, power manager 102 can reassign the task to a different subsystem component (e.g., to another powered-up core, such as core 118 b).
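The shutdown decision described in this section, including the option of reassigning a pending task to another powered-up core, can be sketched as follows. This is a simplified illustrative model; the function and names are assumptions, not the patent's implementation.

```python
def decide_shutdown(core, pending_tasks, powered_cores):
    """pending_tasks: dict core -> list of tasks; powered_cores: set of
    cores currently powered on. Returns the action taken."""
    tasks = pending_tasks.get(core, [])
    if not tasks:
        # no pending work: the core can be shut down to conserve power
        powered_cores.discard(core)
        return "power_down"
    # a pending task exists, but another powered core may absorb the work
    for other in list(powered_cores):
        if other != core:
            pending_tasks.setdefault(other, []).extend(tasks)
            pending_tasks[core] = []
            powered_cores.discard(core)
            return "reassigned_to_" + other
    # no alternative core: keep the core powered for its pending task
    return "stay_on"
```

A real power manager could also choose a third path the text mentions: power the core down now and back up shortly before the scheduled task, which this sketch omits for brevity.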
  • 3. CACHE SNOOPING
  • As discussed above, caches 115 can be used to temporarily store data for subsystems 108 a and 108 c. Subsystems 108 can access data stored in caches 115 faster than data stored in an external memory (not shown). In an embodiment, one subsystem can request to access data stored in a cache of another subsystem. Such requests can be referred to as “cache snooping.” For example, a component of subsystem 108 c may request to snoop into cache 115 a of subsystem 108 a to access data because accessing data from cache 115 a is faster than accessing data from an external memory. Additionally, in an embodiment, accessing data from caches 115 causes less latency than accessing data from an external memory. For example, in an embodiment, core 118 e can send a request (e.g., via control signal 106 e) to access data stored in cache 115 a. Power manager 102 can then determine whether to power on cache 115 a.
  • In an embodiment, power manager 102 can initiate a power-on of cache 115 a without powering up additional components of subsystem 108 a (e.g., without powering up one of cores 118 a, 118 b, 118 c, or 118 d) to enable subsystem 108 c to snoop into cache 115 a. By using this limited powering up technique, the system of FIG. 1A can conserve power. After subsystem 108 c has finished accessing cache 115 a, subsystem 108 c can notify power manager 102 that it has finished accessing cache 115 a and that cache 115 a can be powered down. For example, in an embodiment, core 118 e can send a request (e.g., via control line 106 e) to power down cache 115 a. If power manager 102 determines that cache 115 a is not needed to perform additional tasks, power manager 102 can initiate powering down cache 115 a via control signal 106 b.
  • By powering up and powering down individual components of a subsystem instead of powering up and powering down an entire subsystem, embodiments of the present disclosure advantageously enable caches to remain powered even when other subsystem components have been shut down. For example, if cores 118 a, 118 b, 118 c, and 118 d have been powered down, power manager 102 can still supply cache 115 a with power, enabling subsystem 108 c to snoop into cache 115 a to access data while core 118 e or core 118 f is being used to perform a task.
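The snoop lifecycle just described (power on only the cache, snoop, release it when no task still needs it) can be sketched as two small steps. Names here are illustrative assumptions for the example.

```python
def begin_snoop(cache, power_state):
    """Power on just the target cache for a snoop; the owning
    subsystem's cores are left untouched."""
    power_state[cache] = True

def end_snoop(cache, power_state, pending_tasks_for_cache):
    """After the snooper finishes, power the cache down only if
    no further task needs it."""
    if not pending_tasks_for_cache:
        power_state[cache] = False
```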
  • 3.1 Cache Coherency
  • Systems and methods according to embodiments of the present disclosure can be configured to ensure cache coherency among subsystems. For example, if copies of the same data are stored in both caches 115 a and 115 b, systems and methods according to embodiments of the present disclosure can ensure that changes to data are uniformly made to all copies of the data stored in caches.
  • FIG. 1B is a block diagram of a system for managing power including a cache coherency module (CCM) in accordance with an embodiment of the present disclosure. In FIG. 1B, CCM subsystem 108 b includes CCM 114, which ensures cache coherency among caches 115. As shown in FIG. 1B, CCM subsystem 108 b is coupled to subsystems 108 a and 108 c and also to power manager 102. CCM subsystem 108 b can communicate with power manager 102 using control signals 106 c and 106 d.
  • CCM 114 arbitrates requests to access data stored in caches 115. In an embodiment, CCM 114 includes a dedicated processor (not shown) or hardware logic to process instructions for arbitrating requests to access data stored in caches 115. In an embodiment, CCM 114 is notified when data is written to or read from caches 115, and CCM 114 records (or has access to) information regarding what data is stored in caches coupled to CCM subsystem 108 b (e.g., caches 115). Thus, in an embodiment, subsystems 108 are not required to know which data is stored in which cache before requesting access to stored data. Instead, subsystems 108 can send a request to access data to CCM 114, and CCM 114 can determine whether the data is stored in one of caches 115 or whether it should access the data from external memory. In an embodiment, if CCM 114 is not powered on, subsystems 108 can send a request to power manager 102 to power on CCM 114, and then subsystems 108 can send a request to access data to CCM 114.
  • For example, in an embodiment, a component of subsystem 108 c (e.g., core 118 e) sends a request to CCM 114 to access data. CCM 114 receives the request, and determines whether the data is stored in a cache (e.g., in cache 115 a or 115 b). If the data is not stored in a cache, CCM 114 initiates a retrieval of the data from external memory. If the data is stored in a cache, CCM 114 initiates a retrieval of the data from the cache (e.g., from cache 115 a). If the cache storing the data is not supplied with power, CCM 114 can send a request to power manager 102 to power on the cache so that the data can be read from the cache.
  • In an embodiment, CCM 114 is notified when data is written to a cache (e.g., to cache 115 a or 115 b). For example, if core 118 e wants to write data to cache 115 b, core 118 e first notifies CCM 114 that it is planning to write data to cache 115 b. In an embodiment, CCM 114 notifies other subsystems accessing the data that the data is going to be updated, and CCM 114 can also update copies of the data stored in other caches. Additionally, in an embodiment, CCM 114 can be required to approve the request to write data to a cache before the data is written. For example, in an embodiment, CCM 114 may determine that a task using the data should be allowed to finish before the data is updated. Alternatively, CCM 114 can be configured to notify a process in progress that it is using stale data that is being updated. The process may then complete using the updated data (or the process may restart using the updated data).
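The write path just described, where the CCM propagates an update to every cache holding a stale copy before the write completes, can be sketched as below. The class and its layout are assumptions for this example, not the patent's design.

```python
class CoherencyModule:
    """Minimal sketch of a cache coherency module: tracks cache contents
    and keeps all cached copies of an address consistent on writes."""

    def __init__(self):
        self.caches = {}  # cache name -> {address: value}

    def write(self, writer_cache, address, value):
        """Update every cache holding a copy of the address, then
        perform the write in the requesting cache."""
        for name, lines in self.caches.items():
            if name != writer_cache and address in lines:
                lines[address] = value  # refresh the stale copy
        self.caches.setdefault(writer_cache, {})[address] = value
```

A hardware CCM could instead invalidate the stale copies rather than update them; this sketch shows the update variant the text describes.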
  • 4. POWER MANAGER INCLUDING SWITCHES AND SWITCHING REGULATORS
  • FIG. 1C is a block diagram illustrating a more detailed diagram of a system for managing power in accordance with an embodiment of the present disclosure. As shown in FIG. 1C, in an embodiment, power manager 102 can include sub-power managers for subsystems coupled to power manager 102. For example, as shown in FIG. 1C, power manager 102 includes sub-power managers 104 a, 104 b, and 104 c for subsystems 108 a, 108 b, and 108 c, respectively. In an embodiment, sub-power managers 104 can receive control signals 106 a, 106 c, and 106 e from subsystems 108 and can send control signals 106 b, 106 d, and 106 f to subsystems 108.
  • In FIG. 1C, subsystems 108 include switching regulators 110, phase-locked loops (PLLs) 112, and switches 116. For example, subsystem 108 a includes an adjustable switching regulator (ASR) 110 a coupled to PLL 112 a and switches 116 a, 116 b, 116 c, and 116 d. PLL 112 a provides a clock signal for subsystem 108 a. In an embodiment, ASR 110 a supplies power to cache 115 a and to cores 118 a, 118 b, 118 c, and 118 d via switches 116 a, 116 b, 116 c, and 116 d. As shown in FIG. 1C, each of switches 116 a, 116 b, 116 c, and 116 d is coupled to a respective core 118 a, 118 b, 118 c, and 118 d.
  • When sub-power manager 104 a determines that a core (e.g., core 118 a) should be powered down, sub-power manager 104 a can send a control signal (e.g., control signal 106 b) to the subsystem (e.g., subsystem 108 a). The control signal can instruct the subsystem and/or a switching regulator (e.g., ASR 110 a) to toggle a switch coupled to the core (e.g., ASR 110 a can toggle switch 116 a coupled to core 118 a) to cut off power from the core. If the sub-power manager determines that an entire subsystem should be powered down, the sub-power manager can stop supplying power to the switching regulator of the subsystem. For example, sub-power manager 104 a can stop supplying power to ASR 110 a to cut off power from subsystem 108 a.
  • When sub-power manager 104 a determines that a core (e.g., core 118 a) should be powered on, sub-power manager 104 a can send a control signal (e.g., control signal 106 b) to the subsystem (e.g., subsystem 108 a). The control signal can instruct the subsystem and/or a switching regulator (e.g., ASR 110 a) to toggle a switch coupled to the core (e.g., ASR 110 a can toggle switch 116 a coupled to core 118 a so that switch 116 a connects ASR 110 a to core 118 a) to supply power to the core. If sub-power manager 104 a determines that an entire subsystem should be powered on, sub-power manager 104 a can supply power to the switching regulator of the subsystem. For example, sub-power manager 104 a can supply power to ASR 110 a to supply power to subsystem 108 a.
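  • The switch-based gating described above can be sketched in software as follows. The disclosure describes hardware, so the class, method, and attribute names here are purely illustrative assumptions: a core receives power only when its regulator is enabled and its per-core switch is closed.

```python
# Hypothetical model of an ASR (e.g., ASR 110a) feeding per-core switches
# (e.g., switches 116a-116d). Names are illustrative, not from the disclosure.

class AdjustableSwitchingRegulator:
    def __init__(self, num_switches):
        self.enabled = True                    # regulator itself has power
        self.switches = [True] * num_switches  # one switch per core

    def toggle_switch(self, index, on):
        # A control signal (e.g., control signal 106b) asks the ASR to
        # connect or disconnect the switch coupled to a given core.
        self.switches[index] = on

    def core_has_power(self, index):
        # A core is powered only if the regulator is on AND its switch is closed.
        return self.enabled and self.switches[index]


asr = AdjustableSwitchingRegulator(num_switches=4)
asr.toggle_switch(0, False)        # power down core 0 only
assert not asr.core_has_power(0)
assert asr.core_has_power(1)       # other cores are unaffected
asr.enabled = False                # powering down the whole subsystem
assert not asr.core_has_power(1)   # cuts power to every core at once
```

The two power-down granularities in the text map onto the two controls here: toggling one switch removes power from one core, while disabling the regulator removes power from the entire subsystem.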
  • In an embodiment, a cache of a subsystem is powered down when the subsystem is powered down, and a cache of a subsystem is powered on when the subsystem powers on. For example, in an embodiment, cache 115 a is powered down when subsystem 108 a is powered down, and cache 115 a is powered on when subsystem 108 a is powered on. However, it should be understood that in an embodiment, caches can be powered down and powered on without requiring a power down or power on of the entire subsystem. For example, in an embodiment, cache 115 a can be coupled to a dedicated switch (not shown), and ASR 110 a can toggle this dedicated switch to cut off power from cache 115 a or supply power to cache 115 a without requiring entire subsystem 108 a to be powered down or powered on.
  • As shown in FIG. 1C, CCM subsystem 108 b includes a cache switching regulator (CSR) 110 b coupled to a switch 116 e. CSR 110 b toggles switch 116 e on or off to supply power to CCM 114, and PLL 112 b supplies a clock signal for CCM 114. When sub-power manager 104 b determines that CCM subsystem 108 b should be powered down, sub-power manager 104 b can send a control signal (e.g., control signal 106 c) to CCM subsystem 108 b. The control signal instructs CCM subsystem 108 b and/or CSR 110 b to toggle switch 116 e coupled to CCM 114 to cut off power from CCM 114. When sub-power manager 104 b determines that CCM subsystem 108 b should be powered on, sub-power manager 104 b can send a control signal (e.g., control signal 106 c) to CCM subsystem 108 b instructing CSR 110 b to toggle switch 116 e coupled to CCM 114 to supply power to CCM 114.
  • In an embodiment, subsystem components and/or subsystems can send a message to power manager 102 and/or respective sub-power managers 104 when the subsystem components and/or subsystems have finished performing tasks. These messages can optionally include requests to power down the subsystem components and/or subsystems. For example, in an embodiment, cores 118 a, 118 b, 118 c, and 118 d can send a message to sub-power manager 104 a when cores 118 a, 118 b, 118 c, and 118 d have finished performing tasks. If, after receiving this message, sub-power manager 104 a determines that any of cores 118 a, 118 b, 118 c, and/or 118 d should be powered down, sub-power manager 104 a can initiate a powering down of cores 118 a, 118 b, 118 c, and/or 118 d by sending a control signal (e.g., control signal 106 b) to ASR 110 a to instruct ASR 110 a to toggle switches 116 a, 116 b, 116 c, and/or 116 d to cut off power to cores 118 a, 118 b, 118 c, and/or 118 d. In an embodiment, sub-power manager 104 a can determine whether any other system components need to access any of cores 118 a, 118 b, 118 c, and/or 118 d before powering down any of cores 118 a, 118 b, 118 c, and/or 118 d.
  • Additionally, for example, subsystem 108 a can send a message to power manager 102 when subsystem 108 a has finished performing tasks. For example, if cache 115 a is no longer being used, subsystem 108 a can send a message to sub-power manager 104 a requesting that subsystem 108 a be powered down. If, after receiving this message, sub-power manager 104 a determines that subsystem 108 a should be powered down, sub-power manager 104 a can initiate a powering down of subsystem 108 a by sending a control signal (e.g., control signal 106 b) to ASR 110 a to cut off power from ASR 110 a to power down subsystem 108 a. In an embodiment, sub-power manager 104 a can determine whether any other system components need to access subsystem 108 a before powering down subsystem 108 a.
  • In an embodiment, subsystems can also send a message to power manager 102 informing power manager 102 that they have finished performing tasks using components of other subsystems. For example, if subsystem 108 a has finished accessing cache 115 b of subsystem 108 c, subsystem 108 a can send a message to power manager 102 informing power manager 102 that it is no longer accessing cache 115 b. In an embodiment, subsystem 108 a can send this message to sub-power manager 104 a, and sub-power manager 104 a can forward the message to sub-power manager 104 c. However, it should be understood that sub-power manager 104 a or power manager 102 can process this message in accordance with embodiments of the present disclosure. If, after receiving this message, power manager 102 determines that subsystem 108 c should be powered down (e.g., to cut off power from cache 115 b), power manager 102 can initiate a powering down of subsystem 108 c by sending a control signal (e.g., control signal 106 f) to ASR 110 c to cut off power from ASR 110 c to power down subsystem 108 c (and thus power down cache 115 b). In an embodiment, power manager 102 can determine whether any other system components need to access cache 115 b and/or other components of subsystem 108 c before powering down subsystem 108 c.
  • In an embodiment, CCM subsystem 108 b can also send a message to power manager 102 when CCM subsystem 108 b has finished performing tasks. For example, CCM subsystem 108 b can send a message to sub-power manager 104 b when CCM subsystem 108 b is no longer being used to arbitrate access to caches 115. If, after receiving this message, sub-power manager 104 b determines that CCM subsystem 108 b should be powered down, sub-power manager 104 b can initiate a powering down of CCM subsystem 108 b by sending a control signal (e.g., control signal 106 d) to CSR 110 b to cut off power from CSR 110 b to power down CCM subsystem 108 b (and thus power down CCM 114). In an embodiment, sub-power manager 104 b can determine whether any other system components need to access CCM 114 and/or other components of CCM subsystem 108 b before powering down CCM subsystem 108 b.
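  • The task-finished message flow described in the preceding paragraphs might be modeled as below. The classes, the ownership map, and the name strings are hypothetical; the sketch only shows how a message can be forwarded to the sub-power manager that owns a resource and acted on once no user of that resource remains.

```python
# Illustrative model of message forwarding between a power manager and its
# sub-power managers. All identifiers here are assumptions for the sketch.

class SubPowerManager:
    def __init__(self, name):
        self.name = name
        self.active_users = set()  # components still using this subsystem

    def handle_finished(self, user):
        # A "finished performing tasks" message arrives for one user.
        self.active_users.discard(user)
        return self.try_power_down()

    def try_power_down(self):
        # Power down only when no other component still needs access.
        return len(self.active_users) == 0


class PowerManager:
    """Routes messages to the sub-power manager that owns the resource."""
    def __init__(self):
        self.subs = {}

    def forward_finished(self, owner, user):
        # e.g., subsystem 108a tells sub-power manager 104a it is done with
        # cache 115b; the message is forwarded to sub-power manager 104c.
        return self.subs[owner].handle_finished(user)


pm = PowerManager()
pm.subs["104c"] = SubPowerManager("104c")
pm.subs["104c"].active_users = {"108a", "108b"}
assert pm.forward_finished("104c", "108a") is False  # 108b still using it
assert pm.forward_finished("104c", "108b") is True   # now safe to power down
```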
  • 5. SYSTEM LAYERING
  • Systems and methods according to embodiments of the present disclosure enable subsystems and/or subsystem components to be powered on in layers so that unused system components are not supplied with power. This layering concept provides an efficient, flexible approach to supplying power to various subsystem components. For example, in an embodiment, power manager 102 will not attempt to power down an entire subsystem while a subsystem component is still being used to perform a task. Instead, power manager 102 adopts a layered approach by first attempting to power down unused subsystem components. Then, once all subsystem components have finished performing tasks, power manager 102 determines whether to power down the subsystem. Finally, if all subsystems have finished performing tasks, power manager 102 determines whether to power down CCM subsystem 108 b (and thus power down CCM 114).
  • For example, in an embodiment, power manager 102 does not power down ASR 110 a (which, in an embodiment, supplies power to entire subsystem 108 a including cache 115 a) until all of cores 118 a, 118 b, 118 c, and 118 d have been powered down (e.g., via switches 116 a, 116 b, 116 c, and 116 d, respectively). Additionally, in an embodiment, power manager 102 does not power down CSR 110 b (which, in an embodiment, supplies power to entire CCM subsystem 108 b including CCM 114) until both subsystems 108 a and 108 c have been powered down (e.g., via ASR 110 a and ASR 110 c, respectively).
  • In an embodiment, this layering concept can also extend to powering up subsystems and subsystem components. For example, in an embodiment, power manager 102 does not power on subsystem 108 a or subsystem 108 c until CCM subsystem 108 b has been powered on (e.g., by supplying power to CSR 110 b). Additionally, in an embodiment, power manager 102 does not power on any of cores 118 a, 118 b, 118 c, or 118 d until subsystem 108 a has been powered on (e.g., by supplying power to ASR 110 a).
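  • The layering rules above amount to an ordering constraint: power up top-down (CCM subsystem, then subsystem, then core) and power down bottom-up. A minimal sketch of that invariant follows; all class, method, and identifier names are assumptions for illustration.

```python
# Hypothetical enforcement of the power-sequencing layers described above.

class LayeredPowerManager:
    def __init__(self):
        self.ccm_on = False
        self.subsystems_on = set()
        self.cores_on = set()  # (subsystem, core) pairs

    def power_on_subsystem(self, sub):
        if not self.ccm_on:
            raise RuntimeError("CCM subsystem must be powered on first")
        self.subsystems_on.add(sub)

    def power_on_core(self, sub, core):
        if sub not in self.subsystems_on:
            raise RuntimeError("subsystem must be powered on first")
        self.cores_on.add((sub, core))

    def power_down_subsystem(self, sub):
        if any(s == sub for s, _ in self.cores_on):
            raise RuntimeError("power down all cores of the subsystem first")
        self.subsystems_on.discard(sub)

    def power_down_ccm(self):
        if self.subsystems_on:
            raise RuntimeError("power down all subsystems first")
        self.ccm_on = False


lpm = LayeredPowerManager()
lpm.ccm_on = True                       # e.g., CCM subsystem 108b comes up first
lpm.power_on_subsystem("108a")          # then a subsystem (and its cache)
lpm.power_on_core("108a", "118a")       # then an individual core
lpm.cores_on.discard(("108a", "118a"))  # tear down in the reverse order
lpm.power_down_subsystem("108a")
lpm.power_down_ccm()
assert not lpm.ccm_on
```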
  • In an embodiment, caches in accordance with embodiments of the present disclosure (e.g., caches 115 a and/or 115 b) can be partitioned into multiple portions, and each portion of a cache can be powered down when not in use to conserve power and powered up when needed. For example, in an embodiment, power manager 102 can send a message instructing a portion of cache 115 a to be powered down when this portion of cache 115 a is not needed. While a portion of cache 115 a is powered down, other portions of cache 115 a can still be powered on and accessed. When power manager 102 determines that a powered down portion of cache 115 a needs to be used to perform a task, power manager 102 can send a message instructing the powered down portion of cache 115 a to be powered on again.
  • For example, in an embodiment, cache 115 a can be split into a first portion and a second portion. If, for example, core 118 e has finished accessing the first portion of cache 115 a, core 118 e can send a message to power manager 102 informing power manager 102 that it has finished using the first portion of cache 115 a and that the first portion of cache 115 a can be powered down. If power manager 102 determines that no other subsystems need to access the first portion of cache 115 a, power manager 102 can send a message to ASR 110 a instructing ASR 110 a to cut off power to the first portion of cache 115 a. While the first portion of cache 115 a is powered down, the second portion of cache 115 a can still receive power from ASR 110 a and can still be accessed by other subsystem components. If, for example, core 118 f needs to access the first portion of cache 115 a, core 118 f can send a message to power manager 102 requesting that the first portion of cache 115 a be powered on. Power manager 102 can then send a message to ASR 110 a instructing ASR 110 a to supply power to the first portion of cache 115 a.
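  • The per-portion behavior in this example can be sketched as reference counting per portion: a portion loses power when its last user releases it and regains power when any user requests it. `PartitionedCache` and its methods are hypothetical names, not structures from the disclosure.

```python
# Illustrative model of per-portion cache power gating (names assumed).

class PartitionedCache:
    def __init__(self, num_portions):
        self.powered = [True] * num_portions
        self.users = [set() for _ in range(num_portions)]

    def release(self, portion, user):
        # e.g., core 118e reports it has finished with the first portion.
        self.users[portion].discard(user)
        if not self.users[portion]:
            self.powered[portion] = False  # ASR cuts power to this portion only

    def acquire(self, portion, user):
        # e.g., core 118f later requests the first portion again.
        self.powered[portion] = True       # ASR restores power on demand
        self.users[portion].add(user)


cache_115a = PartitionedCache(num_portions=2)
cache_115a.acquire(0, "118e")
cache_115a.release(0, "118e")          # last user gone: portion 0 powers down
assert cache_115a.powered == [False, True]
cache_115a.acquire(0, "118f")          # powered back on when needed
assert cache_115a.powered[0]
```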
  • In an embodiment, the components of the system of FIG. 1A, the components of the system of FIG. 1B, and/or the components of the system of FIG. 1C can be implemented on a single integrated circuit (IC). In another embodiment, some components of the systems of FIGS. 1A, 1B, and/or 1C are implemented using multiple ICs. For example, in an embodiment, power manager 102 and subsystems 108 are implemented on different ICs. Additionally, it should be understood that the components of the systems of FIGS. 1A, 1B, and/or 1C can be implemented using hardware, software, or a combination of hardware and software in accordance with embodiments of the present disclosure.
  • 6. METHODS
  • FIG. 2 is a flowchart of a method for powering up components of a subsystem in accordance with an embodiment of the present disclosure. In step 200, the CCM is powered on first. For example, sub-power manager 104 b can send a control signal (e.g., control signal 106 d) to CCM subsystem 108 b if power manager 102 determines that CCM subsystem 108 b is powered down. In step 202, a subsystem is powered on. For example, once power manager 102 determines that CCM subsystem 108 b has power, power manager 102 can then power on a subsystem (e.g., subsystem 108 a or 108 c) so that the subsystem can be accessed. For example, in an embodiment, if subsystem 108 a is powered on, cache 115 a can be accessed. In step 204, a subsystem component is powered on. For example, in an embodiment, if sub-power manager 104 a determines that subsystem 108 a has power, sub-power manager 104 a can send control signal 106 b to ASR 110 a to instruct ASR 110 a to toggle switch 116 a to supply power to core 118 a so that core 118 a can be used to perform a task.
  • FIG. 3 is a flowchart of a method for powering down components of a system in accordance with an embodiment of the present disclosure. In step 300, a subsystem component is powered down. For example, sub-power manager 104 a can send control signal 106 b to ASR 110 a to instruct ASR 110 a to toggle switch 116 a to power down core 118 a. In step 302, a subsystem is powered down. For example, once sub-power manager 104 a has powered down cores 118 a, 118 b, 118 c, and 118 d, sub-power manager 104 a can determine to power down subsystem 108 a when sub-power manager 104 a receives a request to power down subsystem 108 a. In an embodiment, cache 115 a is also powered down when subsystem 108 a is powered down. In step 304, CCM 114 is powered down. For example, once power manager 102 has powered down subsystems 108 a and 108 c, sub-power manager 104 b can determine to power down CCM subsystem 108 b (and thus CCM 114) when sub-power manager 104 b receives a request to power down CCM subsystem 108 b.
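  • The power-up sequence of FIG. 2 and the power-down sequence of FIG. 3 are mirror images of one another. The sketch below models both as ordered checklists over three layer flags; the function and key names are assumptions, not terms from the disclosure.

```python
# Illustrative ordering of the FIG. 2 / FIG. 3 sequences (names assumed).

def power_up_sequence(state):
    """Power up top-down; returns the steps actually taken."""
    steps = []
    if not state.get("ccm"):
        state["ccm"] = True        # step 200: power on the CCM first
        steps.append("ccm")
    if not state.get("subsystem"):
        state["subsystem"] = True  # step 202: then the subsystem (and its cache)
        steps.append("subsystem")
    if not state.get("core"):
        state["core"] = True       # step 204: finally the individual core
        steps.append("core")
    return steps

def power_down_sequence(state):
    """Power down bottom-up; returns the steps actually taken."""
    steps = []
    if state.get("core"):
        state["core"] = False       # step 300: core first
        steps.append("core")
    if state.get("subsystem"):
        state["subsystem"] = False  # step 302: then the subsystem (and its cache)
        steps.append("subsystem")
    if state.get("ccm"):
        state["ccm"] = False        # step 304: CCM subsystem last
        steps.append("ccm")
    return steps

state = {}
assert power_up_sequence(state) == ["ccm", "subsystem", "core"]
assert power_down_sequence(state) == ["core", "subsystem", "ccm"]
```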
  • FIG. 4 is a flowchart of a method for processing a request to power down a component of a system in accordance with an embodiment of the present disclosure. In step 400, a request to power down a system component is received. For example, sub-power manager 104 a can receive a request to power down core 118 a. In step 402, a determination is made regarding whether other system components need to access the system component. For example, sub-power manager 104 a can determine whether other system components need to access core 118 a (e.g., by determining whether an instruction is pending for core 118 a). If the power manager (e.g., power manager 102) determines that other system components need to access the system component, the method proceeds to step 404, and the system component is left on. For example, sub-power manager 104 a may determine to leave core 118 a powered on if sub-power manager 104 a determines that other system components need to access core 118 a. If the power manager (e.g., power manager 102) determines that other system components do not need to access the system component, the method proceeds to step 406, and the system component is powered down. For example, sub-power manager 104 a may determine to power down core 118 a if sub-power manager 104 a determines that other system components do not need to access core 118 a.
  • In an embodiment, if sub-power manager 104 a receives a request to power down cache 115 a and/or subsystem 108 a in step 400, sub-power manager 104 a determines whether other subsystem components need to access cache 115 a and/or subsystem 108 a in step 402. If sub-power manager 104 a determines that other system components need to access cache 115 a and/or subsystem 108 a, the method proceeds to step 404, and cache 115 a and/or subsystem 108 a is left on. If sub-power manager 104 a determines that other system components do not need to access cache 115 a and/or subsystem 108 a, the method proceeds to step 406, and cache 115 a and/or subsystem 108 a are powered down (e.g., by powering down ASR 110 a).
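  • The decision of FIG. 4 (steps 400 through 406) reduces to a single check on whether any other component still needs the resource. The sketch below assumes a hypothetical `pending_accessors` map; neither it nor the string keys come from the disclosure.

```python
# Illustrative FIG. 4 decision: leave a component on if anything still needs it.

def handle_power_down_request(component, pending_accessors):
    """Return 'left on' or 'powered down' for the requested component."""
    # Step 402: does any other system component still need this one
    # (e.g., an instruction pending for core 118a)?
    if pending_accessors.get(component):
        return "left on"       # step 404: keep supplying power
    return "powered down"      # step 406: safe to cut power

pending = {"core_118a": ["core_118b"], "cache_115a": []}
assert handle_power_down_request("core_118a", pending) == "left on"
assert handle_power_down_request("cache_115a", pending) == "powered down"
```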
  • FIG. 5 is a flowchart of a method for processing a request to access stored data in accordance with an embodiment of the present disclosure. In step 500, a request to access data is received. For example, CCM 114 can receive a request from subsystem 108 a to access data. In step 502, a determination is made regarding whether the data is stored in a cache. For example, CCM 114 can determine whether the data is stored in cache 115 a or cache 115 b. If the CCM (e.g., CCM 114) determines that the data is not stored in cache, the method proceeds to step 506, and the data is accessed from external memory. For example, CCM 114 can send a request to external memory to access the data. If the CCM (e.g., CCM 114) determines that the data is stored in cache, the method proceeds to step 504, and a determination is made regarding whether the cache is powered on. For example, CCM 114 can determine that the data is stored in cache 115 b and can then determine whether cache 115 b is powered on.
  • In an embodiment, the CCM can send a request to power manager 102 to determine whether the cache is powered on. For example, in an embodiment, CCM 114 sends a request to power manager 102 via control signal 106 c to determine whether cache 115 b is powered on. In an embodiment, power manager 102 can respond to the CCM via control signal 106 d. If the CCM (e.g., CCM 114) determines that the cache is powered on, the method proceeds to step 510, and the data is accessed from the cache. For example, CCM 114 can retrieve the data from cache 115 b. If the CCM (e.g., CCM 114) determines that the cache is not powered on, the method proceeds to step 508, and a request to power on the cache is sent. For example, CCM 114 can send a request to power on cache 115 b to power manager 102 via control signal 106 c. In an embodiment, sub-power manager 104 c can then power on ASR 110 c to supply power to cache 115 b. Once the cache is powered on, the method proceeds to step 510, and the data is accessed from the cache (e.g., from cache 115 b).
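  • The FIG. 5 lookup path (steps 500 through 510) can be sketched as follows. The dictionary-based cache, power-manager, and external-memory objects are illustrative stand-ins for the hardware described above, not structures named in the disclosure.

```python
# Illustrative FIG. 5 flow: check caches, power a hit cache on if needed,
# otherwise fall back to external memory. All names are assumptions.

def access_data(addr, caches, power_mgr, external_memory):
    for name, cache in caches.items():
        if addr in cache:                      # step 502: data in this cache?
            if not power_mgr["on"].get(name):  # step 504: is the cache powered?
                power_mgr["on"][name] = True   # step 508: request power-on
            return cache[addr]                 # step 510: read from the cache
    return external_memory[addr]               # step 506: go to external memory

caches = {"115b": {0x40: "hit-data"}}
power = {"on": {"115b": False}}
ext = {0x80: "ext-data"}
assert access_data(0x40, caches, power, ext) == "hit-data"
assert power["on"]["115b"] is True  # cache was powered on along the way
assert access_data(0x80, caches, power, ext) == "ext-data"
```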
  • 7. EXAMPLE COMPUTER SYSTEM ENVIRONMENT
  • It will be apparent to persons skilled in the relevant art(s) that various elements and features of the present disclosure, as described herein, can be implemented in hardware using analog and/or digital circuits, in software, through the execution of instructions by one or more general purpose or special-purpose processors, or as a combination of hardware and software.
  • The following description of a general purpose computer system is provided for the sake of completeness. Embodiments of the present disclosure can be implemented in hardware, or as a combination of software and hardware. Consequently, embodiments of the disclosure may be implemented in the environment of a computer system or other processing system. An example of such a computer system 600 is shown in FIG. 6. Modules depicted in FIGS. 1A-1C may execute on one or more computer systems 600. Furthermore, each of the steps of the processes depicted in FIGS. 2-5 can be implemented on one or more computer systems 600.
  • Computer system 600 includes one or more processors, such as processor 604. Processor 604 can be a special purpose or a general purpose digital signal processor. Processor 604 is connected to a communication infrastructure 602 (for example, a bus or network). Various software implementations are described in terms of this exemplary computer system. After reading this description, it will become apparent to a person skilled in the relevant art(s) how to implement the disclosure using other computer systems and/or computer architectures.
  • Computer system 600 also includes a main memory 606, preferably random access memory (RAM), and may also include a secondary memory 608. Secondary memory 608 may include, for example, a hard disk drive 610 and/or a removable storage drive 612, representing a floppy disk drive, a magnetic tape drive, an optical disk drive, or the like. Removable storage drive 612 reads from and/or writes to a removable storage unit 616 in a well-known manner. Removable storage unit 616 represents a floppy disk, magnetic tape, optical disk, or the like, which is read by and written to by removable storage drive 612. As will be appreciated by persons skilled in the relevant art(s), removable storage unit 616 includes a computer usable storage medium having stored therein computer software and/or data.
  • In alternative implementations, secondary memory 608 may include other similar means for allowing computer programs or other instructions to be loaded into computer system 600. Such means may include, for example, a removable storage unit 618 and an interface 614. Examples of such means may include a program cartridge and cartridge interface (such as that found in video game devices), a removable memory chip (such as an EPROM, or PROM) and associated socket, a thumb drive and USB port, and other removable storage units 618 and interfaces 614 which allow software and data to be transferred from removable storage unit 618 to computer system 600.
  • Computer system 600 may also include a communications interface 620. Communications interface 620 allows software and data to be transferred between computer system 600 and external devices. Examples of communications interface 620 may include a modem, a network interface (such as an Ethernet card), a communications port, a PCMCIA slot and card, etc. Software and data transferred via communications interface 620 are in the form of signals which may be electronic, electromagnetic, optical, or other signals capable of being received by communications interface 620. These signals are provided to communications interface 620 via a communications path 622. Communications path 622 carries signals and may be implemented using wire or cable, fiber optics, a phone line, a cellular phone link, an RF link and other communications channels.
  • As used herein, the terms “computer program medium” and “computer readable medium” are used to generally refer to tangible storage media such as removable storage units 616 and 618 or a hard disk installed in hard disk drive 610. These computer program products are means for providing software to computer system 600.
  • Computer programs (also called computer control logic) are stored in main memory 606 and/or secondary memory 608. Computer programs may also be received via communications interface 620. Such computer programs, when executed, enable the computer system 600 to implement the present disclosure as discussed herein. In particular, the computer programs, when executed, enable processor 604 to implement the processes of the present disclosure, such as any of the methods described herein. Accordingly, such computer programs represent controllers of the computer system 600. Where the disclosure is implemented using software, the software may be stored in a computer program product and loaded into computer system 600 using removable storage drive 612, interface 614, or communications interface 620.
  • In another embodiment, features of the disclosure are implemented primarily in hardware using, for example, hardware components such as application-specific integrated circuits (ASICs) and gate arrays. Implementation of a hardware state machine so as to perform the functions described herein will also be apparent to persons skilled in the relevant art(s).
  • 8. CONCLUSION
  • It is to be appreciated that the Detailed Description, and not the Abstract, is intended to be used to interpret the claims. The Abstract may set forth one or more but not all exemplary embodiments of the present disclosure as contemplated by the inventor(s), and thus, is not intended to limit the present disclosure and the appended claims in any way.
  • The present disclosure has been described above with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof. The boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed.
  • The foregoing description of the specific embodiments will so fully reveal the general nature of the disclosure that others can, by applying knowledge within the skill of the art, readily modify and/or adapt for various applications such specific embodiments, without undue experimentation, without departing from the general concept of the present disclosure. Therefore, such adaptations and modifications are intended to be within the meaning and range of equivalents of the disclosed embodiments, based on the teaching and guidance presented herein. It is to be understood that the phraseology or terminology herein is for the purpose of description and not of limitation, such that the terminology or phraseology of the present specification is to be interpreted by the skilled artisan in light of the teachings and guidance.
  • Any representative signal processing functions described herein can be implemented in hardware, software, or some combination thereof. For instance, signal processing functions can be implemented using computer processors, computer logic, application specific circuits (ASIC), digital signal processors, etc., as will be understood by those skilled in the art based on the discussion given herein. Accordingly, any processor that performs the signal processing functions described herein is within the scope and spirit of the present disclosure.
  • The above systems and methods may be implemented as a computer program executing on a machine, as a computer program product, or as a tangible and/or non-transitory computer-readable medium having stored instructions. For example, the functions described herein could be embodied by computer program instructions that are executed by a computer processor or any one of the hardware devices listed above. The computer program instructions cause the processor to perform the signal processing functions described herein. The computer program instructions (e.g. software) can be stored in a tangible non-transitory computer usable medium, computer program medium, or any storage medium that can be accessed by a computer or processor. Such media include a memory device such as a RAM or ROM, or other type of computer storage medium such as a computer disk or CD ROM. Accordingly, any tangible non-transitory computer storage medium having computer program code that cause a processor to perform the signal processing functions described herein are within the scope and spirit of the present disclosure.
  • While various embodiments of the present disclosure have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be apparent to persons skilled in the relevant art that various changes in form and detail can be made therein without departing from the spirit and scope of the disclosure. Thus, the breadth and scope of the present disclosure should not be limited by any of the above-described exemplary embodiments, and further the invention should be defined only in accordance with the following claims and their equivalents.

Claims (20)

What is claimed is:
1. A system, comprising:
a first subsystem, comprising:
a cache memory, and
a processor core configured to initiate sending a message indicating that the processor core has finished performing a task;
a cache coherency module (CCM) coupled to the cache memory; and
a power manager coupled to the first subsystem, wherein the power manager is configured to:
receive the message,
determine whether the processor core is needed to perform an additional task, and
in response to determining that the processor core is not needed to perform the additional task, initiate powering down of the processor core without powering down the cache memory.
2. The system of claim 1, wherein the first subsystem further comprises:
a switch coupled to the processor core, wherein the power manager is configured to initiate toggling of the switch to initiate powering down the processor core.
3. The system of claim 1, wherein the cache memory is partitioned into a plurality of portions, wherein the plurality of portions includes a first portion and a second portion, and wherein the power manager is further configured to:
initiate powering down of the first portion without powering down the second portion.
4. The system of claim 1, further comprising:
a second subsystem coupled to:
the power manager, and
the CCM.
5. The system of claim 4, wherein the second subsystem comprises:
a switching regulator;
a phase-locked loop (PLL) coupled to the switching regulator;
a second cache memory coupled to:
the switching regulator, and
the PLL;
a first switch coupled to the switching regulator;
a second processor core coupled to the first switch;
a second switch coupled to the switching regulator; and
a third processor core coupled to the second switch.
6. The system of claim 1, wherein the power manager comprises:
a first sub-power manager coupled to the first subsystem; and
a second sub-power manager coupled to the CCM.
7. The system of claim 1, wherein the power manager is further configured to:
in response to initiating the powering down of the processor core, determine whether the first subsystem is needed to perform the additional task; and
in response to determining that the first subsystem is not needed to perform the additional task, initiate powering down of the first subsystem.
8. The system of claim 1, wherein the power manager is further configured to:
in response to initiating the powering down of the first subsystem, determine whether the CCM is needed to perform the additional task; and
in response to determining that the CCM is not needed to perform the additional task, initiate powering down of the CCM.
9. The system of claim 1, wherein the power manager is further configured to:
receive a request to power on the cache memory; and
in response to receiving the request to power on the cache memory, initiate powering up the first subsystem.
10. The system of claim 1, wherein the power manager is further configured to:
receive a request to power on the processor core;
in response to receiving the request to power on the processor core, determine whether the first subsystem is powered on;
in response to determining that the first subsystem is powered on:
initiate powering up the processor core; and
in response to determining that the first subsystem is not powered on:
initiate powering up the first subsystem, and
initiate powering up the processor core.
11. The system of claim 1, wherein the CCM is configured to:
receive a request to access data;
determine whether the data is stored in the cache memory;
in response to determining that the data is not stored in the cache memory, access data from external memory; and
in response to determining that the data is stored in the cache memory, initiate accessing the data.
12. The system of claim 11, wherein the CCM is further configured to:
determine whether the cache memory is powered on; and
in response to determining that the cache memory is not powered on, initiate sending a request to the power manager to power on the cache memory.
13. The system of claim 11, further comprising:
a second subsystem, comprising a second cache memory coupled to the CCM, wherein the CCM is further configured to:
determine whether the data is stored in the second cache memory.
14. The system of claim 13, wherein the CCM is further configured to:
determine whether at least a portion of the second cache memory is powered on; and
in response to determining that at least a portion of the second cache memory is not powered on, initiate sending a request to the power manager to power on at least a portion of the second cache memory.
15. A system, comprising:
a first subsystem, comprising:
a first subsystem component configured to send a message indicating that the first subsystem component has finished performing a task, and
a second subsystem component;
a power manager coupled to the first subsystem, wherein the power manager is configured to:
receive the message,
determine whether the first subsystem component is needed to perform an additional task, and
in response to determining that the first subsystem component is not needed to perform the additional task, initiate powering down of the first subsystem component without powering down the second subsystem component.
16. The system of claim 15, wherein the first subsystem component is a processor core.
17. The system of claim 15, further comprising:
a cache coherency module (CCM) coupled to the second subsystem component, wherein the second subsystem component is a cache memory.
18. A method, comprising:
receiving, using a power managing device, a request to power down a first component of a first subsystem;
determining, using the power managing device, whether the first component is needed to perform a task for a second subsystem; and
in response to determining that the first component is not needed to perform the task for the second subsystem, initiating, using the power managing device, powering down the first component without powering down a cache memory of the first subsystem.
19. The method of claim 18, further comprising:
determining whether the cache memory is needed to perform the task; and
in response to determining that the cache memory is not needed to perform the task, initiating powering down the first subsystem.
20. The method of claim 19, further comprising:
determining whether the second subsystem is needed to perform a second task; and
in response to determining that the second subsystem is not needed to perform the second task, initiating powering down the second subsystem.
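The power-down method of claims 18-20 can likewise be sketched in Python. The names below (`PowerManagingDevice`, `needed_by`, `request_power_down`) are hypothetical: the device powers down a requested component only if no other subsystem needs it, keeps the subsystem's cache alive by default, and cascades to a full subsystem power-down only once the cache is also unneeded.

```python
class PowerManagingDevice:
    """Sketch of the method of claims 18-20 (all names hypothetical).
    'needed_by' maps a component name to the set of subsystems that
    still have tasks requiring it."""
    def __init__(self, needed_by):
        self.needed_by = needed_by
        self.off = set()  # names of powered-down components/subsystems

    def request_power_down(self, component, subsystem, cache, other_subsystem):
        # Claim 18: refuse if the other subsystem still needs this component.
        if other_subsystem in self.needed_by.get(component, set()):
            return False
        # Power down the component but leave the subsystem's cache powered,
        # since another coherent master may still snoop its lines.
        self.off.add(component)
        # Claim 19: if nothing needs the cache either, power down the
        # whole subsystem.
        if not self.needed_by.get(cache):
            self.off.add(subsystem)
        return True
```

For example, a request to power down a core still needed by a second subsystem is rejected, while a request for an unneeded core succeeds without taking the cache or subsystem down as long as the cache remains in use.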
US14/026,885 2013-01-29 2013-09-13 Low Power Control for Multiple Coherent Masters Abandoned US20140215252A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/026,885 US20140215252A1 (en) 2013-01-29 2013-09-13 Low Power Control for Multiple Coherent Masters

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201361757947P 2013-01-29 2013-01-29
US14/026,885 US20140215252A1 (en) 2013-01-29 2013-09-13 Low Power Control for Multiple Coherent Masters

Publications (1)

Publication Number Publication Date
US20140215252A1 true US20140215252A1 (en) 2014-07-31

Family

ID=51222233

Family Applications (5)

Application Number Title Priority Date Filing Date
US13/849,115 Expired - Fee Related US9000805B2 (en) 2013-01-29 2013-03-22 Resonant inductor coupling clock distribution
US14/027,068 Expired - Fee Related US9170769B2 (en) 2013-01-29 2013-09-13 Crosstalk mitigation in on-chip interfaces
US14/026,885 Abandoned US20140215252A1 (en) 2013-01-29 2013-09-13 Low Power Control for Multiple Coherent Masters
US14/026,985 Abandoned US20140215233A1 (en) 2013-01-29 2013-09-13 Power Management System Using Blocker Modules Coupled to a Bus
US14/626,445 Expired - Fee Related US9264046B2 (en) 2013-01-29 2015-02-19 Resonant inductor coupling clock distribution

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US13/849,115 Expired - Fee Related US9000805B2 (en) 2013-01-29 2013-03-22 Resonant inductor coupling clock distribution
US14/027,068 Expired - Fee Related US9170769B2 (en) 2013-01-29 2013-09-13 Crosstalk mitigation in on-chip interfaces

Family Applications After (2)

Application Number Title Priority Date Filing Date
US14/026,985 Abandoned US20140215233A1 (en) 2013-01-29 2013-09-13 Power Management System Using Blocker Modules Coupled to a Bus
US14/626,445 Expired - Fee Related US9264046B2 (en) 2013-01-29 2015-02-19 Resonant inductor coupling clock distribution

Country Status (1)

Country Link
US (5) US9000805B2 (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9172383B2 (en) 2013-01-29 2015-10-27 Broadcom Corporation Induction-coupled clock distribution for an integrated circuit
US9330740B1 (en) * 2013-12-18 2016-05-03 Altera Corporation First-in first-out circuits and methods
US10520974B2 (en) * 2015-06-22 2019-12-31 Northrop Grumman Systems Corporation Clock distribution system
CN106657338B (en) * 2016-12-26 2020-08-07 华东理工大学 Power supply centralized monitoring system and monitoring method thereof
US20180275714A1 (en) * 2017-03-24 2018-09-27 Integrated Device Technology, Inc. Inductive coupling for data communication in a double data rate memory system
US10461804B2 (en) 2018-01-25 2019-10-29 Western Digital Technologies, Inc. Elimination of crosstalk effects in non-volatile storage
US10884450B2 (en) 2018-03-06 2021-01-05 Northrop Grumman Systems Corporation Clock distribution system
US10643732B2 (en) * 2018-03-22 2020-05-05 Western Digital Technologies, Inc. Determining line functionality according to line quality in non-volatile storage
US11556769B2 (en) 2019-04-29 2023-01-17 Massachusetts Institute Of Technology Superconducting parametric amplifier neural network
US10754371B1 (en) 2019-11-13 2020-08-25 Northrop Grumman Systems Corporation Capacitive clock distribution system
US11231742B1 (en) 2021-03-08 2022-01-25 Northrop Grumman Systems Corporation Clock distribution resonator system
US11429135B1 (en) 2021-03-11 2022-08-30 Northrop Grumman Systems Corporation Clock distribution system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060176040A1 (en) * 2005-01-05 2006-08-10 Fyre Storm, Inc. Low power method of monitoring and of responsively initiating higher powered intelligent response to detected change of condition
US20080307244A1 (en) * 2007-06-11 2008-12-11 Media Tek, Inc. Method of and Apparatus for Reducing Power Consumption within an Integrated Circuit
US20100185821A1 (en) * 2009-01-21 2010-07-22 Arm Limited Local cache power control within a multiprocessor system
US20110213993A1 (en) * 2010-03-01 2011-09-01 Peter Richard Greenhalgh Data processing apparatus and method for transferring workload between source and destination processing circuitry
US20140189411A1 (en) * 2013-01-03 2014-07-03 Apple Inc. Power control for cache structures

Family Cites Families (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4622630A (en) * 1983-10-28 1986-11-11 Data General Corporation Data processing system having unique bus control protocol
US4987529A (en) * 1988-08-11 1991-01-22 Ast Research, Inc. Shared memory bus system for arbitrating access control among contending memory refresh circuits, peripheral controllers, and bus masters
US5313582A (en) * 1991-04-30 1994-05-17 Standard Microsystems Corporation Method and apparatus for buffering data within stations of a communication network
US5659690A (en) * 1992-10-15 1997-08-19 Adaptec, Inc. Programmably configurable host adapter integrated circuit including a RISC processor
GB9226522D0 (en) * 1992-12-19 1993-02-10 Harvey Geoffrey P Power saving electronic logic circuit
US5519883A (en) * 1993-02-18 1996-05-21 Unisys Corporation Interbus interface module
DE4436553A1 (en) * 1994-10-13 1996-04-18 Philips Patentverwaltung Power supply facility
US6532544B1 (en) * 1999-11-08 2003-03-11 International Business Machines Corporation High gain local clock buffer for a mesh clock distribution utilizing a gain enhanced split driver clock buffer
US6701390B2 (en) * 2001-06-06 2004-03-02 Koninklijke Philips Electronics N.V. FIFO buffer that can read and/or write multiple and/or selectable number of data words per bus cycle
US7636834B2 (en) * 2002-03-01 2009-12-22 Broadcom Corporation Method and apparatus for resetting a gray code counter
US7155618B2 (en) * 2002-03-08 2006-12-26 Freescale Semiconductor, Inc. Low power system and method for a data processing system
US20030221030A1 (en) * 2002-05-24 2003-11-27 Timothy A. Pontius Access control bus system
JP2004192021A (en) * 2002-12-06 2004-07-08 Renesas Technology Corp Microprocessor
US7793005B1 (en) * 2003-04-11 2010-09-07 Zilker Labs, Inc. Power management system using a multi-master multi-slave bus and multi-function point-of-load regulators
US7240130B2 (en) * 2003-06-12 2007-07-03 Hewlett-Packard Development Company, L.P. Method of transmitting data through an I2C router
US6882182B1 (en) * 2003-09-23 2005-04-19 Xilinx, Inc. Tunable clock distribution system for reducing power dissipation
US7317264B2 (en) * 2003-11-25 2008-01-08 Eaton Corporation Method and apparatus to independently control contactors in a multiple contactor configuration
US7254677B1 (en) * 2004-05-04 2007-08-07 Xilinx, Inc. First-in, first-out memory system with reduced cycle latency
CN101385298A (en) * 2006-02-13 2009-03-11 Nxp股份有限公司 Data communication method, data transmission and reception device and system
US20070288731A1 (en) * 2006-06-08 2007-12-13 Bradford Jeffrey P Dual Path Issue for Conditional Branch Instructions
JP2010511942A (en) * 2006-12-01 2010-04-15 ザ・リージェンツ・オブ・ザ・ユニバーシティ・オブ・ミシガン Clock distribution network architecture for resonant clocked systems
US8405617B2 (en) * 2007-01-03 2013-03-26 Apple Inc. Gated power management over a system bus
US8954017B2 (en) * 2011-08-17 2015-02-10 Broadcom Corporation Clock signal multiplication to reduce noise coupled onto a transmission communication signal of a communications device
JP5775398B2 (en) * 2011-08-25 2015-09-09 ルネサスエレクトロニクス株式会社 Semiconductor integrated circuit device
US20140035649A1 (en) * 2012-07-31 2014-02-06 Fujitsu Limited Tuned resonant clock distribution system

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9244521B2 (en) * 2012-12-26 2016-01-26 Intel Corporation Supporting runtime D3 and buffer flush and fill for a peripheral component interconnect device
US9746910B2 (en) 2012-12-26 2017-08-29 Intel Corporation Supporting runtime D3 and buffer flush and fill for a peripheral component interconnect device
US20140181559A1 (en) * 2012-12-26 2014-06-26 Robert Gough Supporting runtime d3 and buffer flush and fill for a peripheral component interconnect device
KR102386662B1 (en) * 2014-08-18 2022-04-13 자일링크스 인코포레이티드 Sub-system power management control
US20160048193A1 (en) * 2014-08-18 2016-02-18 Xilinx, Inc. Sub-system power management control
EP3183662B1 (en) * 2014-08-18 2023-10-25 Xilinx, Inc. Sub-system power management control
CN106575276A (en) * 2014-08-18 2017-04-19 赛灵思公司 Sub-system power management control
KR20170044677A (en) * 2014-08-18 2017-04-25 자일링크스 인코포레이티드 Sub-system power management control
US9696789B2 (en) * 2014-08-18 2017-07-04 Xilinx, Inc. Sub-system power management control
US9823730B2 (en) * 2015-07-08 2017-11-21 Apple Inc. Power management of cache duplicate tags
US20170010655A1 (en) * 2015-07-08 2017-01-12 Apple Inc. Power Management of Cache Duplicate Tags
US20180239338A1 (en) * 2016-03-14 2018-08-23 Omron Corporation Relay device, control method for relay device, and non-transitory computer-readable recording medium
US10962959B2 (en) * 2016-03-14 2021-03-30 Omron Corporation Relay, control method, and non-transitory computer-readable recording medium for power supply control
CN107924170A (en) * 2016-03-14 2018-04-17 欧姆龙株式会社 Transferring device, the control method of transferring device, control program and record media
US20190065372A1 (en) * 2017-08-23 2019-02-28 Qualcomm Incorporated Providing private cache allocation for power-collapsed processor cores in processor-based systems
US10482016B2 (en) * 2017-08-23 2019-11-19 Qualcomm Incorporated Providing private cache allocation for power-collapsed processor cores in processor-based systems
US20200264788A1 (en) * 2019-02-15 2020-08-20 Qualcomm Incorporated Optimal cache retention mechanism

Also Published As

Publication number Publication date
US20140210518A1 (en) 2014-07-31
US20150162914A1 (en) 2015-06-11
US20140215233A1 (en) 2014-07-31
US9264046B2 (en) 2016-02-16
US20140215104A1 (en) 2014-07-31
US9170769B2 (en) 2015-10-27
US9000805B2 (en) 2015-04-07

Similar Documents

Publication Publication Date Title
US20140215252A1 (en) Low Power Control for Multiple Coherent Masters
US10671133B2 (en) Configurable power supplies for dynamic current sharing
US9430323B2 (en) Power mode register reduction and power rail bring up enhancement
US10679690B2 (en) Method and apparatus for completing pending write requests to volatile memory prior to transitioning to self-refresh mode
JP5905408B2 (en) Multi-CPU system and computing system including the same
EP2805243B1 (en) Hybrid write-through/write-back cache policy managers, and related systems and methods
US20140173311A1 (en) Methods and Systems for Operating Multi-Core Processors
EP2661661B1 (en) Method and system for managing sleep states of interrupt controllers in a portable computing device
US11514955B2 (en) Power management integrated circuit with dual power feed
CN110488673B (en) Data processing module and data processing method in low power consumption mode
CN110716633B (en) Device and method for coordinately managing SSD power consumption, computer device and storage medium
JP2015064676A (en) Information processing device, semiconductor device, information processing method, and program
US20050060591A1 (en) Information processor, program, storage medium, and control circuit
US10691195B2 (en) Selective coupling of memory to voltage rails based on operating mode of processor
TW201339820A (en) Adaptive voltage scaling using a serial interface
CN111142644A (en) Hard disk operation control method and device and related components
KR20190048204A (en) Method of operating system on chip, system on chip performing the same and electronic system including the same
JP2019510284A (en) Use of volatile memory as non-volatile memory
CN104460938B (en) System-wide power conservation method and system using memory cache
US20150177816A1 (en) Semiconductor integrated circuit apparatus
US20170220354A1 (en) Server node shutdown
EP3356910B1 (en) Power-aware cpu power grid design
CN112579005B (en) Method, device, computer equipment and storage medium for reducing average power consumption of SSD
US11442522B2 (en) Method of controlling performance boosting of semiconductor device based on at least user input and feedback from previous boosting policies and semiconductor device performing the method
CN112148365B (en) Control module, method and microcontroller chip

Legal Events

Date Code Title Description
AS Assignment

Owner name: BROADCOM CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:FULLERTON, MARK;PATEL, RONAK;CHEN, TIMOTHY;AND OTHERS;SIGNING DATES FROM 20130909 TO 20130913;REEL/FRAME:031206/0311

AS Assignment

Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH CAROLINA

Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:BROADCOM CORPORATION;REEL/FRAME:037806/0001

Effective date: 20160201

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD., SINGAPORE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BROADCOM CORPORATION;REEL/FRAME:041706/0001

Effective date: 20170120

AS Assignment

Owner name: BROADCOM CORPORATION, CALIFORNIA

Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:041712/0001

Effective date: 20170119