WO2012145212A2 - Method and system for thermal load management in a portable computing device - Google Patents

Method and system for thermal load management in a portable computing device

Info

Publication number
WO2012145212A2
PCT/US2012/033192 (US 2012033192 W)
Authority
WO
WIPO (PCT)
Prior art keywords
thermal
processing area
temperature
thermal energy
core
Prior art date
Application number
PCT/US2012/033192
Other languages
French (fr)
Other versions
WO2012145212A3 (en)
Inventor
Jon J. Anderson
Sumit Sur
Jeffrey A. Niemann
James M. Artmeier
Original Assignee
Qualcomm Incorporated
Priority date
Filing date
Publication date
Application filed by Qualcomm Incorporated filed Critical Qualcomm Incorporated
Priority to CN201280019740.9A priority Critical patent/CN103582857B/en
Priority to EP12716927.4A priority patent/EP2699977A2/en
Priority to KR1020137030978A priority patent/KR101529419B1/en
Priority to JP2014506456A priority patent/JP6059204B2/en
Publication of WO2012145212A2 publication Critical patent/WO2012145212A2/en
Publication of WO2012145212A3 publication Critical patent/WO2012145212A3/en


Classifications

    • G – PHYSICS
    • G06 – COMPUTING; CALCULATING OR COUNTING
    • G06F – ELECTRIC DIGITAL DATA PROCESSING
    • G06F 1/00 – Details not covered by groups G06F 3/00-G06F 13/00 and G06F 21/00
    • G06F 1/26 – Power supply means, e.g. regulation thereof
    • G06F 1/32 – Means for saving power
    • G06F 1/3203 – Power management, i.e. event-based initiation of a power-saving mode
    • G06F 1/3234 – Power saving characterised by the action undertaken
    • G06F 1/324 – Power saving characterised by the action undertaken by lowering clock frequency
    • G – PHYSICS
    • G06 – COMPUTING; CALCULATING OR COUNTING
    • G06F – ELECTRIC DIGITAL DATA PROCESSING
    • G06F 1/00 – Details not covered by groups G06F 3/00-G06F 13/00 and G06F 21/00
    • G06F 1/16 – Constructional details or arrangements
    • G06F 1/20 – Cooling means
    • G – PHYSICS
    • G06 – COMPUTING; CALCULATING OR COUNTING
    • G06F – ELECTRIC DIGITAL DATA PROCESSING
    • G06F 1/00 – Details not covered by groups G06F 3/00-G06F 13/00 and G06F 21/00
    • G06F 1/16 – Constructional details or arrangements
    • G06F 1/20 – Cooling means
    • G06F 1/206 – Cooling means comprising thermal management
    • G – PHYSICS
    • G06 – COMPUTING; CALCULATING OR COUNTING
    • G06F – ELECTRIC DIGITAL DATA PROCESSING
    • G06F 1/00 – Details not covered by groups G06F 3/00-G06F 13/00 and G06F 21/00
    • G06F 1/26 – Power supply means, e.g. regulation thereof
    • G06F 1/32 – Means for saving power
    • G06F 1/3203 – Power management, i.e. event-based initiation of a power-saving mode
    • G06F 1/3234 – Power saving characterised by the action undertaken
    • G06F 1/3296 – Power saving characterised by the action undertaken by lowering the supply or operating voltage
    • Y – GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 – TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D – CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 – Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • Portable computing devices are becoming necessities for people on personal and professional levels. These devices may include cellular telephones, portable digital assistants (PDAs), portable game consoles, palmtop computers, and other portable electronic devices.
  • Portable computing devices ("PCDs") typically do not have active cooling devices, like fans, which are often found in larger computing devices such as laptop and desktop computers. Instead of using fans, PCDs may rely on the spatial arrangement of electronic packaging so that two or more active and heat producing components are not positioned in close proximity to one another. When two or more heat producing components are suitably spaced from one another within a PCD, then heat generated from the operation of each component may not negatively impact the operation of the other. Moreover, when a heat producing component within a PCD is physically isolated from other components within the device, the heat generated from the operation of the heat producing component may not negatively impact other surrounding electronics. Many PCDs may also rely on passive cooling devices, such as heat sinks, to manage thermal energy among the electronic components which collectively form a respective PCD.
  • PCDs are typically limited in size and, therefore, room for components within a PCD often comes at a premium. As a result, there is typically not enough space within a PCD for engineers and designers to mitigate thermal degradation or failure by leveraging spatial arrangements or the placement of passive cooling components.
  • In conventional approaches, the operating system is designed to cool the PCD by simply shutting down most of the electronic components within the PCD which are generating the excessive thermal energy. While shutting down electronics may be an effective measure for avoiding the generation of excessive thermal energy within a PCD, such drastic measures inevitably impact the performance of a PCD and, in some cases, may even render a PCD functionally inoperable for a period of time.
  • Various embodiments of methods and systems for controlling and/or managing thermal energy generation on a portable computing device are disclosed. Because temperature readings may correlate to a process load within a thermal energy generating component, one such method involves placing a temperature sensor proximate to a thermal energy generating component of a chip in a portable computing device and then monitoring, at a first rate, temperature readings generated by the temperature sensor. Based on the detection of a first monitored temperature reading which may indicate that a processing area within the component, such as a high power density sub-processor area, has exceeded a temperature threshold, the method reallocates a portion of the process load running on the first processing area of the component to a second processing area of the component.
  • reallocation of the process load portion serves to lower the amount of energy generated in any unit area of the component over a unit of time.
  • Exemplary methods may further comprise steps for subsequent reallocation of the process load from the second processing area to the first processing area when a second monitored temperature reading indicates that the component has cooled.
  • the quality of service ("QoS") associated with the portable computing device can be returned to preferred levels.
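  • As an illustration only, the monitor-and-reallocate sequence described in the preceding bullets might look like the C sketch below. None of this code is from the publication: the driver hooks (read_sensor_c, migrate_load), the threshold and hysteresis values, the polling periods, and the 50% reallocation fraction are all hypothetical placeholders.

```c
#include <stdbool.h>
#include <unistd.h>

/* Hypothetical driver hooks -- placeholders, not an API from the publication. */
extern float read_sensor_c(int sensor_id);              /* degrees Celsius      */
extern void  migrate_load(int from_area, int to_area, float fraction);

#define SENSOR_NEAR_SUBPROC   0
#define AREA_SUBPROC          0       /* high power density sub-processor area  */
#define AREA_MAIN             1       /* larger, cooler main processing area    */

#define THRESHOLD_C        80.0f      /* assumed trip point                     */
#define COOLED_C           70.0f      /* assumed hysteresis point               */
#define NORMAL_PERIOD_US 1000000      /* "first rate": poll once per second     */
#define FAST_PERIOD_US    100000      /* faster polling after the trip point    */

void thermal_monitor_loop(void)
{
    bool reallocated = false;

    for (;;) {
        float t = read_sensor_c(SENSOR_NEAR_SUBPROC);

        if (!reallocated && t > THRESHOLD_C) {
            /* First processing area exceeded its threshold: steer a portion
             * of its process load to the second processing area.              */
            migrate_load(AREA_SUBPROC, AREA_MAIN, 0.5f);
            reallocated = true;
        } else if (reallocated && t < COOLED_C) {
            /* The component has cooled: return the load so QoS recovers.      */
            migrate_load(AREA_MAIN, AREA_SUBPROC, 0.5f);
            reallocated = false;
        }

        usleep(reallocated ? FAST_PERIOD_US : NORMAL_PERIOD_US);
    }
}
```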
  • Exemplary embodiments leverage temperature sensors strategically placed within a PCD near known thermal energy producing components such as, but not limited to, central processing unit (“CPU”) cores, graphical processing unit (“GPU”) cores, power management integrated circuits ("PMIC” or “PMICs”), power amplifiers, etc. Temperature signals generated by the sensors may be monitored and used to trigger drivers running on the processing units to cause the reallocation of processing loads correlating with a given component's excessive generation of thermal energy.
  • the processing load reallocation is mapped according to parameters associated with pre-identified thermal load scenarios.
  • the processing load reallocation occurs in real time, or near real time, according to thermal management solutions generated by a thermal management algorithm that may consider CPU and/or GPU performance specifications along with real time temperature sensor data.
  • FIG. 1 is a functional block diagram illustrating an embodiment of a computer system for simulating thermal load distributions in a portable computing device
  • FIG. 2 is a logical flowchart illustrating an embodiment of a method for generating the thermal load steering table of FIG. 1 for use by the PCD to control the distribution of thermal load;
  • FIG. 3 is a data diagram illustrating an embodiment of the thermal load steering table of FIG. 1;
  • FIG. 4A is an overhead schematic diagram of the spatial arrangement of an exemplary integrated circuit illustrating a thermal load distribution under a simulated workload;
  • FIG. 4B illustrates the integrated circuit of FIG. 4A in which the thermal load distribution is distributed to a location closer to a thermal sensor according to the thermal load steering parameters in the thermal load steering table of FIG. 3;
  • FIG. 5 is a logical flowchart illustrating an embodiment of a method for controlling thermal load distribution in the PCD of FIG. 1;
  • FIG. 6 is a functional block diagram illustrating an exemplary embodiment of the PCD of FIG. 1;
  • FIG. 7A is a functional block diagram illustrating an exemplary spatial arrangement of hardware for the chip illustrated in FIG. 6;
  • FIG. 7B is a schematic diagram illustrating an exemplary software architecture of the PCD of FIG. 6 for supporting dynamic voltage and frequency scaling ("DVFS”) algorithms;
  • FIG. 7C is a first table listing exemplary frequency values for two DVFS algorithms
  • FIG. 7D is a second table listing exemplary frequency and voltage pairs for two DVFS algorithms
  • FIG. 8 is an exemplary state diagram that illustrates various thermal policy states that may be managed by the thermal policy manager in the PCD of FIG. 1;
  • FIG. 9 is a diagram illustrating exemplary thermal mitigation techniques that may be applied or ordered by the thermal policy manager.
  • FIG. 10 is a diagram illustrating an exemplary graph of temperature versus time and corresponding thermal policy states
  • FIGs. 11 A & 1 IB are logical flowcharts illustrating a method for managing one or more thermal policies
  • FIG. 12 is a logical flowchart illustrating a sub-method or subroutine for applying process load reallocation thermal mitigation techniques
  • FIG. 13A is a schematic diagram for a four-core multi-core processor and different workloads that may be spatially managed with the multi-core processor;
  • FIG. 13B is a schematic diagram for a four-core multi-core processor and thermal energy dissipation hotspots that may be managed from process load reallocation algorithms with the multi-core processor; and FIG. 14 is a functional block diagram illustrating an exemplary spatial arrangement of hardware for the chip illustrated in FIG. 6 and exemplary components external to the chip illustrated in FIG. 6.
  • an “application” may also include files having executable content, such as: object code, scripts, byte code, markup language files, and patches.
  • an “application” referred to herein may also include files that are not executable in nature, such as documents that may need to be opened or other data files that need to be accessed.
  • content may also include files having executable content, such as: object code, scripts, byte code, markup language files, and patches.
  • content as referred to herein, may also include files that are not executable in nature, such as documents that may need to be opened or other data files that need to be accessed.
  • a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer.
  • an application running on a computing device and the computing device may be a component.
  • One or more components may reside within a process and/or thread of execution, and a component may be localized on one computer and/or distributed between two or more computers.
  • these components may execute from various computer readable media having various data structures stored thereon.
  • the components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the Internet with other systems by way of the signal).
  • the terms "communication device,” “wireless device,” “wireless telephone,” “wireless communication device” and “wireless handset” are used interchangeably.
  • The terms "thermal" and "thermal energy" may be used in association with a device or component capable of generating or dissipating energy that can be measured in units of "temperature." Consequently, it will further be understood that the term "temperature," with reference to some standard value, envisions any measurement that may be indicative of the relative warmth, or absence of heat, of a "thermal energy" generating device or component. For example, the "temperature" of two components is the same when the two components are in "thermal" equilibrium.
  • The terms "workload," "process load" and "process workload" generally refer to the processing burden, or percentage of processing burden, associated with a given processing component in a given embodiment.
  • A "processing component" or "thermal energy generating component" may be, but is not limited to, a central processing unit, a graphical processing unit, a core, a main core, a sub-core, a processing area, a hardware engine, etc., or any component residing within, or external to, an integrated circuit within a portable computing device.
  • To the extent that the terms "thermal load," "thermal distribution," "thermal signature," "thermal processing load" and the like are indicative of workload burdens that may be running on a processing component, one of ordinary skill in the art will acknowledge that use of these "thermal" terms in the present disclosure may be related to process load distributions and burdens.
  • a PCD may be a cellular telephone, a satellite telephone, a pager, a PDA, a smartphone, a navigation device, a smartbook or reader, a media player, a combination of the aforementioned devices, a laptop computer with a wireless connection, among others.
  • FIG. 1 illustrates an embodiment of a computer system for implementing various features related to thermal load management or "steering" in a PCD 100.
  • the computer system employs two main phases: (1) a simulation phase performed by a simulation computer 10; and (2) an operational phase performed by a PCD 100.
  • the simulation phase involves simulating thermal loads to be experienced by an integrated circuit 102 during operation of the PCD 100.
  • the simulation computer 10 identifies thermal load conditions produced by the PCD 100 under simulated workloads.
  • the simulated workloads may be associated with the running of a specific application or "use case" on a given PCD 100 or, alternatively, may not be associated with any specific or predictable processing load scenario.
  • The simulation computer 10 may determine that, for a simulated thermal load, thermal energy generation can be mitigated by reallocation of processing load across complementary components.
  • the simulation computer 10 improves the PCD 100 performance and user experience by "steering" or reallocating all or a portion of the processing load from a first simulated location on the silicon die to a second simulated location that is available for processing.
  • the second simulated location may be represented in commands, instructions, or any other suitable computer readable data (referred to as "thermal load steering parameter(s)") that may be provided to and used by the PCD 100 during the operational phase to steer the processing load to the second simulated location.
  • the preferred proximity of a likely hotspot to a sensor may be within a 5 degree Celsius range. That is, because temperature associated with a heat wave which has propagated from a hotspot will be lower as the distance to the hotspot is increased, and because there is inevitably a time lag between the time a hotspot begins to occur and the time that a temperature increase may be detected at a distance away from the hotspot, it may be preferred in some embodiments that a temperature sensor be placed at a distance from a hotspot that is predicted to correlate with a 5 degree Celsius drop in temperature.
  • In other embodiments, the sensors may be located at distances from a known hotspot or thermal energy generating component that correlate with temperature differences greater or less than 5 °C.
  • The simulation computer 10 comprises one or more processors and a memory 14.
  • the memory 14 comprises a computer model 22 of the integrated circuit 102 used in the PCD 100.
  • the computer model 22 is a data representation of the various hardware and software components in the PCD 100 and the spatial arrangement, architecture, and operation of the various components of the integrated circuit 102, including, for example, thermal sensors 157 and a CPU 110.
  • A detailed exemplary embodiment of a PCD 100 is described below with reference to FIGS. 6, 7A, 7B and 14. It should be appreciated that any PCD 100 and/or integrated circuit 102 may be modeled and represented in the computer model 22 provided to the simulation computer 10.
  • The computer model 22 may comprise information such as, but not limited to, dimensions, size, and make-up of the printed circuit board ("PCB") stack, the amount of metal in traces, the sizes of the traces, the use of thermal vias, power load per sub-block of the silicon die, power load per component on the PCB, use case specifics of the power load, any temporal dynamics of the power load, and other similar information as understood by one of ordinary skill in the art.
  • The thermal load simulation module(s) 20 interfaces with the computer model 22 to identify the thermal load conditions produced under the simulated workloads.
  • the thermal load simulation module(s) 20 generates the thermal load steering parameters 46 and stores them in, for example, the thermal load steering scenarios table 24, which is provided to the PCD 100.
  • the PCD 100 generally comprises thermal load steering module(s) 26, thermal policy manager module(s) 101, a monitor module 114, a central processing unit 110, one or more thermal sensors 157A located on the integrated circuit 102, and one or more thermal sensors 157B located off the integrated circuit 102.
  • the thermal load steering module(s) 26 generally comprises the logic for monitoring the operations to be performed by the PCD 100 and determining whether thermal load steering should be performed.
  • the thermal load steering module(s) 26 accesses the thermal load steering scenarios table 24, interprets the thermal load steering parameter(s) 46, and schedules the workload in such a way to steer the processing load associated with the thermal load to underutilized, lower temperature or otherwise available processing capacity.
  • such embodiments that leverage thermal load steering parameter(s) to reallocate a processing load to open processing capacity may realize the benefit of lower temperatures resulting from the reallocation.
  • thermal load steering parameter(s) 46 in some embodiments may further include provision of instructions to the thermal load steering module(s) 26 for steering a thermal load to a location near a certain thermal sensor or sensors 157. That is, it is envisioned that some embodiments may generate thermal load steering parameter(s) for the purpose of steering a processing load, which correlates to a given thermal load signature, to available processing capacity nearer a sensor 157.
  • thermal load steering parameter(s) for the purpose of steering a processing load to open processing near a sensor may realize more accurate temperature measurement, thus leading to more efficient reallocation of processing load.
  • An embodiment that includes a CPU 110 having main processing blocks and higher performing, specialized sub-processor blocks may have main processing blocks that represent 3/4 of the CPU 110 area and sub-processor blocks that represent the remaining 1/4 of the CPU area.
  • The main processor blocks may have an associated power density ("PD") that dissipates 1/2 the total power of the overall CPU 110 while the sub-processor blocks also have an associated power density that dissipates 1/2 the total power.
  • thermal load steering parameter(s) to reallocate processing loads from one component to another, such as, for example, from a sub-processor block of CPU 110 to a main processor block of CPU 110, may realize the benefit of lower thermal energy dissipation for a relatively minor tradeoff of processing performance or Quality of Service ("QoS").
  • the main processor blocks may process the load more slowly, thus translating to a lower QoS, but dissipate less thermal energy than the sub-processors.
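  • As a worked check of the area and power split quoted above (this arithmetic is implied by, but not spelled out in, the preceding bullets), with total CPU power P and total CPU area A:

```latex
PD_{\text{main}} = \frac{P/2}{3A/4} = \frac{2P}{3A}, \qquad
PD_{\text{sub}}  = \frac{P/2}{A/4}  = \frac{2P}{A}, \qquad
\frac{PD_{\text{sub}}}{PD_{\text{main}}} = 3
```

  • In other words, under this split the sub-processor blocks dissipate roughly three times as much power per unit area as the main blocks, which is why steering load toward the main blocks lowers peak thermal energy dissipation at the cost of some QoS.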
  • thermal load steering module(s) 26 may communicate with (or be integrated with one or more of) the thermal policy manager module(s) 101 , the monitor module 114, the CPU 110, or any other hardware or software components of the PCD 100.
  • FIG. 2 illustrates a method 28 implemented by the simulation computer 10.
  • the method 28 may be performed during the design and development of the integrated circuit 102 and the PCD 100 so that the devices may be appropriately configured to support the thermal load steering features.
  • the method 28 may be performed after the PCD 100 has been manufactured, in which case the thermal load steering feature may be enabled through appropriate software upgrades.
  • a thermal load condition comprises a spatial thermal load distribution or "hotspot" 48 produced on the integrated circuit 102 under a simulated workload 44.
  • the hotspot 48 as illustrated in FIG. 4A may be located on a first core 222 (FIG. 7A).
  • Measuring thermal energy (i.e., temperature) with a temperature sensor 157A at a point that is some distance from the hotspot 48 may be difficult due to the thermal wave moving across the surface of an object (i.e., the computer chip or printed circuit board).
  • The position of sensor 157A, which is at some distance relative to the hotspot 48, may not have the same temperature as the hotspot 48 itself.
  • Placement of the sensors proximate to components known to dissipate significant amounts of thermal energy, such as at a distance correlating with a 5 °C drop from the likely hotspot center, may provide data useful for more efficient reallocation of processing loads.
  • the simulation computer 10 may determine that the processing load associated with hotspot 48, or a portion of the processing load associated with hotspot 48, should be reallocated to an underutilized or available processing area. Based on the computer model 22, the simulation computer 10 may determine that at least a portion of the simulated workload 44 may be handled by a second core 224 instead of the first core 222, thereby mitigating potential thermal energy dissipation by spreading the processing load across the two cores 222, 224.
  • the appropriate thermal load steering parameters 46 are generated for moving the processing load associated with hotspot 48 to a location on the second core 224 (see FIG. 4B).
  • the simulation computer 10 generates and stores the thermal load steering scenarios table 24 in the memory 14.
  • the thermal load steering scenarios table 24 may comprise a scenario 40 for each simulated thermal load condition with corresponding data such as, but not limited to, thermal load condition data 42, simulated workload data 44, and the thermal load steering parameter(s) 46.
  • The thermal load condition data 42, simulated workload data 44, and thermal load steering parameter(s) 46 may include, but are not limited to, separate use case breakdowns of power dissipation per power consuming (i.e., thermal energy generating) component.
  • the thermal load steering scenarios table 24 is provided to the PCD 100.
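  • For concreteness, one way the thermal load steering scenarios table 24 could be laid out in C is sketched below. The publication does not specify a data layout; the field names, widths, and the default-vector slot (used by optional block 57 of FIG. 5) are assumptions.

```c
#include <stdint.h>

#define MAX_STEER_TARGETS 4

/* One thermal load steering parameter 46: move a share of the process load
 * from the processing area where the hotspot 48 forms to an underutilized
 * target area.                                                               */
typedef struct steering_param {
    uint16_t source_area;
    uint16_t target_area;
    uint8_t  load_fraction_pct;
} steering_param_t;

/* One scenario 40: simulated workload data 44 plus thermal load condition
 * data 42, mapped to its thermal load steering parameter(s) 46.              */
typedef struct steering_scenario {
    uint32_t         workload_id;
    int16_t          hotspot_temp_ddegc;    /* tenths of a degree Celsius     */
    uint16_t         hotspot_area;
    steering_param_t params[MAX_STEER_TARGETS];
    uint8_t          param_count;
} steering_scenario_t;

/* Table 24 as provided to the PCD 100, plus a default load steering vector.  */
typedef struct steering_table {
    const steering_scenario_t *scenarios;
    uint32_t                   count;
    steering_param_t           default_vector;
} steering_table_t;
```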
  • FIG. 5 illustrates an embodiment of a method 50 implemented by the PCD 100 for performing thermal load steering.
  • the thermal load steering scenarios table 24 is stored in memory in the PCD 100.
  • the thermal load steering module(s) 26 monitors scheduled workloads for the PCD 100. In an embodiment, the monitoring may be performed by interfacing with an O/S scheduler 207 (See FIGs. 7A-7B), which receives and manages requests for hardware resources on the PCD 100. By monitoring the O/S scheduler requests, the thermal load steering module(s) 26 may compare the scheduled workloads to the simulated workload data 44 to determine if it matches one of the scenarios 40 in the table 24. If the scheduled workload matches a scenario 40 (decision block 56), the corresponding thermal load steering parameter(s) 46 may be obtained from the table 24 (block 58) and used to schedule, or otherwise reallocate, the workload on the PCD 100 (block 60).
  • In optional block 57, a default load steering vector may be accessed and used by the thermal load steering module 26 if the scheduled workload does not match a scenario 40.
  • Alternatively, optional block 57 may be skipped, in which case the "NO" branch is followed back to decision block 56.
  • the resulting thermal load may be mitigated by more thermally efficient allocation of processing load across the PCD 100.
  • the PCD 100 may initiate any desirable thermal management policies.
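  • Blocks 54 through 60 of method 50, as described above, reduce to a match-then-steer decision. A minimal sketch follows (reusing the illustrative steering table types from the earlier sketch; table_lookup, apply_steering, and apply_default_vector are hypothetical scheduler hooks, not functions from the publication):

```c
#include <stdbool.h>
#include <stdint.h>

struct steering_table;                     /* illustrative types from the      */
struct steering_scenario;                  /* earlier sketch of table 24       */

extern const struct steering_scenario *table_lookup(const struct steering_table *t,
                                                     uint32_t workload_id);
extern void apply_steering(const struct steering_scenario *s);      /* block 60 */
extern void apply_default_vector(const struct steering_table *t);   /* block 57 */

void schedule_workload(const struct steering_table *table, uint32_t workload_id,
                       bool use_default_vector)
{
    const struct steering_scenario *match = table_lookup(table, workload_id);

    if (match != NULL) {                /* decision block 56: scenario matched  */
        apply_steering(match);          /* blocks 58 and 60: reallocate load    */
    } else if (use_default_vector) {    /* optional block 57                    */
        apply_default_vector(table);
    }
    /* Otherwise the workload runs as scheduled, and any resulting thermal load
     * is left to the thermal policy manager module(s) 101.                     */
}
```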
  • FIG. 6 is a functional block diagram of an exemplary, non- limiting aspect of a PCD 100 in the form of a wireless telephone for implementing methods and systems for monitoring thermal conditions and managing thermal policies.
  • PCD 100 may be configured to manage thermal load associated with graphics processing.
  • the PCD 100 includes an on-chip system 102 that includes a multi-core central processing unit ("CPU") 110 and an analog signal processor 126 that are coupled together.
  • the CPU 110 may comprise a zeroth core 222, a first core 224, and an Nth core 230 as understood by one of ordinary skill in the art.
  • the thermal policy manager module(s) 101 may be responsible for monitoring and applying thermal policies that include one or more thermal mitigation techniques that may help a PCD 100 manage thermal conditions and/or thermal loads and avoid experiencing adverse thermal conditions, such as, for example, reaching critical temperatures, while maintaining a high level of functionality.
  • FIG. 6 also shows that the PCD 100 may include a monitor module 114.
  • the monitor module 114 communicates with multiple operational sensors (e.g., thermal sensors 157) distributed throughout the on-chip system 102 and with the CPU 110 of the PCD 100 as well as with the thermal policy manager module 101.
  • the thermal policy manager module 101 may work with the monitor module 114 to identify adverse thermal conditions and apply thermal policies that include one or more thermal mitigation techniques as will be described in further detail below.
  • a touch screen display 132 external to the on-chip system 102 is coupled to the display controller 128 and the touch screen controller 130.
  • PCD 100 may further include a video encoder 134, e.g., a phase-alternating line ("PAL") encoder, a sequential couleur a memoire ("SECAM") encoder, or a national television system(s) committee ("NTSC") encoder.
  • the video encoder 134 is coupled to the multi-core central processing unit (“CPU") 110.
  • a video amplifier 136 is coupled to the video encoder 134 and the touch screen display 132.
  • a video port 138 is coupled to the video amplifier 136.
  • a universal serial bus (“USB”) controller 140 is coupled to the CPU 110.
  • a USB port 142 is coupled to the USB controller 140.
  • a memory 112 and a subscriber identity module (SIM) card 146 may also be coupled to the CPU 110.
  • a digital camera 148 may be coupled to the CPU 110.
  • the digital camera 148 is a charge-coupled device (“CCD”) camera or a complementary metal-oxide semiconductor (“CMOS”) camera.
  • a stereo audio CODEC 150 may be coupled to the analog signal processor 126.
  • an audio amplifier 152 may be coupled to the stereo audio CODEC 150.
  • a first stereo speaker 154 and a second stereo speaker 156 are coupled to the audio amplifier 152.
  • FIG. 6 shows that a microphone amplifier 158 may be also coupled to the stereo audio CODEC 150.
  • a microphone 160 may be coupled to the microphone amplifier 158.
  • a frequency modulation ("FM") radio tuner 162 may be coupled to the stereo audio CODEC 150.
  • an FM antenna 164 is coupled to the FM radio tuner 162.
  • stereo headphones 166 may be coupled to the stereo audio CODEC 150.
  • FIG. 6 further indicates that a radio frequency (“RF") transceiver 168 may be coupled to the analog signal processor 126.
  • An RF switch 170 may be coupled to the RF transceiver 168 and an RF antenna 172.
  • a keypad 174 may be coupled to the analog signal processor 126.
  • a mono headset with a microphone 176 may be coupled to the analog signal processor 126.
  • a vibrator device 178 may be coupled to the analog signal processor 126.
  • FIG. 6 also shows that a power supply 180, for example a battery, is coupled to the on-chip system 102.
  • the power supply includes a rechargeable DC battery or a DC power supply that is derived from an alternating current (“AC”) to DC transformer that is connected to an AC power source.
  • the CPU 110 may also be coupled to one or more internal, on-chip thermal sensors 157A as well as one or more external, off-chip thermal sensors 157B.
  • The on-chip thermal sensors 157A may comprise one or more proportional to absolute temperature ("PTAT") temperature sensors that are based on vertical PNP structure and are usually dedicated to complementary metal oxide semiconductor ("CMOS") very large-scale integration ("VLSI") circuits.
  • the off-chip thermal sensors 157B may comprise one or more thermistors.
  • the thermal sensors 157 may produce a voltage drop that is converted to digital signals with an analog-to-digital converter (“ADC”) controller 103 (See FIG. 7A).
  • other types of thermal sensors 157 may be employed without departing from the scope of the invention.
  • In addition to being controlled and monitored by an ADC controller 103, the thermal sensors 157 may also be controlled and monitored by one or more thermal policy manager module(s) 101.
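  • The voltage-drop-to-digital conversion mentioned a few bullets above (thermal sensors 157 digitized by the ADC controller 103) could be sketched as follows, assuming a linear sensor transfer function and a 12-bit converter; the constants and adc_read_channel() are placeholders, not values from the publication.

```c
#include <stdint.h>

/* Hypothetical linear transfer function: ADC code -> millidegrees Celsius.
 * Real PTAT sensors and thermistors need per-part calibration; the constants
 * here are placeholders for the sketch.                                      */
#define ADC_FULL_SCALE     4095      /* 12-bit converter                      */
#define SENSOR_MV_AT_0C     500      /* sensor output at 0 C, in mV           */
#define SENSOR_MV_PER_C      10      /* sensor slope, mV per degree C         */
#define ADC_REF_MV         1800      /* ADC reference voltage, in mV          */

extern uint16_t adc_read_channel(int channel);   /* placeholder driver call   */

int32_t sensor_read_millideg_c(int channel)
{
    uint16_t code = adc_read_channel(channel);

    /* Convert the ADC code back to millivolts, then millivolts to m deg C.   */
    int32_t mv = ((int32_t)code * ADC_REF_MV) / ADC_FULL_SCALE;
    return ((mv - SENSOR_MV_AT_0C) * 1000) / SENSOR_MV_PER_C;
}
```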
  • the thermal policy manager module(s) may comprise software which is executed by the CPU 110.
  • the thermal policy manager module(s) 101 may also be formed from hardware and/or firmware without departing from the scope of the invention.
  • the thermal policy manager module(s) 101 may be responsible for monitoring and applying thermal policies that include one or more thermal mitigation techniques that may help a PCD 100 avoid critical temperatures while maintaining a high level of functionality.
  • FIG. 1 also shows that the PCD 100 may include a monitor module 114.
  • the monitor module 114 communicates with multiple operational sensors distributed throughout the on-chip system 102 and with the CPU 110 of the PCD 100 as well as with the thermal policy manager module 101.
  • the thermal policy manager module 101 may work with the monitor module to apply thermal policies that include one or more thermal mitigation techniques as will be described in further detail below.
  • The touch screen display 132, the video port 138, the USB port 142, the camera 148, the first stereo speaker 154, the second stereo speaker 156, the microphone 160, the FM antenna 164, the stereo headphones 166, the RF switch 170, the RF antenna 172, the keypad 174, the mono headset 176, the vibrator 178, thermal sensors 157B, and the power supply 180 are external to the on-chip system 102.
  • the monitor module 114 may also receive one or more indications or signals from one or more of these external devices by way of the analog signal processor 126 and the CPU 110 to aid in the real time management of the resources operable on the PCD 100.
  • one or more of the method steps described herein may be implemented by executable instructions and parameters stored in the memory 112 that form the one or more thermal policy manager module(s) 101. These instructions that form the thermal policy manager module(s) may be executed by the CPU 110, the analog signal processor 126, or another processor, in addition to the ADC controller 103 to perform the methods described herein. Further, the processors 110, 126, the memory 112, the instructions stored therein, or a combination thereof may serve as a means for performing one or more of the method steps described herein.
  • FIG. 7A is a functional block diagram illustrating an exemplary spatial arrangement of hardware for the chip 102 illustrated in FIG. 6.
  • the applications CPU 110 is positioned on the far left side region of the chip 102 while the modem CPU 168, 126 is positioned on a far right side region of the chip 102.
  • the applications CPU 110 may comprise a multi-core processor that includes a zeroth core 222, a first core 224, and an Nth core 230.
  • The applications CPU 110 may be executing a thermal policy manager module 101A (when embodied in software) or it may include a thermal policy manager module 101A (when embodied in hardware).
  • the application CPU 110 is further illustrated to include operating system (“O/S") module 207 and a monitor module 114. Further details about the monitor module 114 will be described below in connection with FIG. 7B.
  • The applications CPU 110 may be coupled to one or more phase locked loops ("PLLs") 209A, 209B, which are positioned adjacent to the applications CPU 110 and in the left side region of the chip 102. Adjacent to the PLLs 209A, 209B and below the applications CPU 110 may be an analog-to-digital ("ADC") controller 103 that may include its own thermal policy manager 101B that works in conjunction with the main thermal policy manager module 101A of the applications CPU 110.
  • The thermal policy manager 101B of the ADC controller 103 may be responsible for monitoring and tracking multiple thermal sensors 157 that may be provided "on-chip" 102 and "off-chip" 102.
  • the on-chip or internal thermal sensors 157A may be positioned at various locations.
  • a first internal thermal sensor 157A1 may be positioned in a top center region of the chip 102 between the applications CPU 110 and the modem CPU 168,126 and adjacent to internal memory 112.
  • a second internal thermal sensor 157A2 may be positioned below the modem CPU 168, 126 on a right side region of the chip 102.
  • This second internal thermal sensor 157A2 may also be positioned between an advanced reduced instruction set computer (“RISC”) instruction set machine (“ARM”) 177 and a first graphics processor 135 A.
  • a third internal thermal sensor 157A3 may be positioned between a second graphics processor 135B and a third graphics processor 135C in a far right region of the chip 102.
  • a fourth internal thermal sensor 157A4 may be positioned in a far right region of the chip 102 and beneath a fourth graphics processor 135D.
  • a fifth internal thermal sensor 157A5 may be positioned in a far left region of the chip 102 and adjacent to the PLLs 209 and ADC controller 103.
  • One or more external thermal sensors 157B may also be coupled to the ADC controller 103.
  • The first external thermal sensor 157B1 may be positioned off-chip and adjacent to a top right quadrant of the chip 102 that may include the modem CPU 168, 126, the ARM 177, and a digital-to-analog controller ("DAC") 173.
  • a second external thermal sensor 157B2 may be positioned off-chip and adjacent to a lower right quadrant of the chip 102 that may include the third and fourth graphics processors 135C, 135D.
  • FIG. 7A illustrates one exemplary spatial arrangement and how the main thermal policy manager module 101A and the ADC controller 103 with its thermal policy manager 101B may manage thermal states that are a function of this spatial arrangement.
  • FIG. 7B is a schematic diagram illustrating an exemplary software architecture of the PCD 100 of FIG. 6 and FIG. 7A for supporting dynamic voltage and frequency scaling ("DVFS") algorithms.
  • DVFS algorithms may form or be part of at least one thermal mitigation technique that may be triggered by the thermal policy manager 101 when certain thermal conditions are met as will be described in detail below.
  • the CPU or digital signal processor 110 is coupled to the memory 112 via a bus 211.
  • The CPU 110 is a multiple-core processor having N core processors. That is, the CPU 110 includes a first core 222, a second core 224, and an Nth core 230. As is known to one of ordinary skill in the art, each of the first core 222, the second core 224 and the Nth core 230 are available for supporting a dedicated application or program. Alternatively, one or more applications or programs can be distributed for processing across two or more of the available cores.
  • the CPU 110 may receive commands from the thermal policy manager module(s) 101 that may comprise software and/or hardware. If embodied as software, the thermal policy manager module 101 comprises instructions that are executed by the CPU 110 that issues commands to other application programs being executed by the CPU 110 and other processors.
  • the first core 222, the second core 224 through to the Nth core 230 of the CPU 110 may be integrated on a single integrated circuit die, or they may be integrated or coupled on separate dies in a multiple-circuit package.
  • Designers may couple the first core 222, the second core 224 through to the Nth core 230 via one or more shared caches and they may implement message or instruction passing via network topologies such as bus, ring, mesh and crossbar topologies.
  • the RF transceiver 168 is implemented via digital circuit elements and includes at least one processor such as the core processor 210 (labeled "Core"). In this digital implementation, the RF transceiver 168 is coupled to the memory 112 via bus 213.
  • Each of the bus 211 and the bus 213 may include multiple communication paths via one or more wired or wireless connections, as is known in the art.
  • The bus 211 and the bus 213 may have additional elements, which are omitted for simplicity, such as controllers, buffers (caches), drivers, repeaters, and receivers, to enable communications.
  • Further, the bus 211 and the bus 213 may include address, control, and/or data connections to enable appropriate communications among the aforementioned components.
  • One or more of the startup logic 250, the management logic 260, the dynamic voltage and frequency scaling ("DVFS") interface logic 270, applications in the application store 280 and portions of the file system 290 may be stored on any computer-readable medium for use by or in connection with any computer-related system or method.
  • Processors 110, 126 may be designed to take advantage of DVFS by allowing the clock frequency of each processor to be adjusted with a corresponding adjustment in voltage. Reducing clock frequency alone is not useful, since any power savings is offset by an increase in execution time, resulting in no net reduction in the total energy consumed. However, a reduction in operating voltage results in a proportional savings in power consumed.
  • One main issue for DVFS enabled processors 110, 126 is how to control the balance between performance and power savings.
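  • The DVFS rationale above can be summarized with the standard first-order CMOS dynamic power model (a textbook approximation, not a formula given in this publication): dynamic power grows linearly with clock frequency and quadratically with supply voltage, while the energy for a fixed task is power multiplied by execution time.

```latex
P_{\mathrm{dyn}} \approx \alpha\, C\, V^{2} f, \qquad
t_{\mathrm{exec}} \propto \frac{1}{f}, \qquad
E_{\mathrm{task}} \approx P_{\mathrm{dyn}}\, t_{\mathrm{exec}} \propto \alpha\, C\, V^{2}
```

  • Under this model, lowering f alone leaves the per-task energy essentially unchanged (the "no net reduction" point above), whereas the lower frequency permits a lower V, and it is the voltage reduction that yields the real power and energy savings.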
  • a computer-readable medium is an electronic, magnetic, optical, or other physical device or means that can contain or store a computer program and data for use by or in connection with a computer-related system or method.
  • the various logic elements and data stores may be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions.
  • a "computer- readable medium” can be any means that can store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • the computer-readable medium can be, for example but not limited to, an
  • The computer-readable medium includes the following: an electrical connection (electronic) having one or more wires, a portable computer diskette (magnetic), a random-access memory (RAM) (electronic), a read-only memory (ROM) (electronic), an erasable programmable read-only memory (EPROM, EEPROM, or Flash memory) (electronic), an optical fiber (optical), and a portable compact disc read-only memory (CD-ROM) (optical).
  • the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, for instance via optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
  • the various logic may be implemented with any or a combination of the following technologies, which are each well known in the art: a discrete logic circuit(s) having logic gates for implementing logic functions upon data signals, an application specific integrated circuit (ASIC) having appropriate combinational logic gates, a programmable gate array(s) (PGA), a field programmable gate array (FPGA), etc.
  • The memory 112 is a non-volatile data storage device such as a flash memory or a solid-state memory device. Although depicted as a single device, the memory 112 may be a distributed memory device with separate data stores coupled to the digital signal processor and/or the core 210 (or additional processor cores) in the RF transceiver 168.
  • The startup logic 250 includes one or more executable instructions for selectively identifying, loading, and executing a select program for managing or controlling the performance of one or more of the available cores.
  • a select program can be found in the program store 296 of the embedded file system 290 and is defined by a specific combination of a performance scaling algorithm 297 and a set of parameters 298.
  • the select program when executed by one or more of the core processors in the CPU 110 and the core 210 in the RF transceiver 168, may operate in accordance with one or more signals provided by the monitor module 114 in combination with control signals provided by the one or more thermal policy manager module(s) 101 to scale the performance of the respective processor core.
  • the monitor module 114 may provide one or more indicators of events, processes, applications, resource status conditions, elapsed time, as well as temperature as received from the thermal policy manager module 101.
  • the management logic 260 includes one or more executable instructions for terminating an operative performance scaling program on one or more of the respective processor cores, as well as selectively identifying, loading, and executing a more suitable replacement program for managing or controlling the performance of one or more of the available cores.
  • the management logic 260 is arranged to perform these functions at run time or while the PCD 100 is powered and in use by an operator of the device.
  • a replacement program can be found in the program store 296 of the embedded file system 290 and is defined by a specific combination of a performance scaling algorithm 297 and a set of parameters 298.
  • The replacement program, when executed by one or more of the core processors in the digital signal processor or the core 210 in the RF transceiver 168, may operate in accordance with one or more signals provided by the monitor module 114 or one or more signals provided on the respective control inputs of the various processor cores to scale the performance of the respective processor core.
  • The monitor module 114 may provide one or more indicators of events, processes, applications, resource status conditions, elapsed time, temperature, etc., in response to control signals originating from the thermal policy manager 101.
  • the DVFS interface logic or interface logic 270 includes one or more executable instructions for presenting, managing and interacting with external inputs to observe, configure, or otherwise update information stored in the embedded file system 290.
  • the interface logic 270 may operate in conjunction with manufacturer inputs received via the USB port 142. These inputs may include one or more programs to be deleted from or added to the program store 296. Alternatively, the inputs may include edits or changes to one or more of the programs in the program store 296.
  • the inputs may identify one or more changes to, or entire replacements of one or both of the startup logic 250 and the management logic 260.
  • the inputs may include a change to the management logic 260 that instructs the PCD 100 to suspend all performance scaling in the RF transceiver 168 when the received signal power falls below an identified threshold.
  • the inputs may include a change to the management logic 260 that instructs the PCD 100 to apply a desired program when the video codec 134 is active.
  • the interface logic 270 enables a manufacturer to controllably configure and adjust an end user's experience under defined operating conditions on the PCD 100.
  • the memory 112 is a flash memory
  • one or more of the startup logic 250, the management logic 260, the interface logic 270, the application programs in the application store 280 or information in the embedded file system 290 can be edited, replaced, or otherwise modified.
  • the interface logic 270 may permit an end user or operator of the PCD 100 to search, locate, modify or replace the startup logic 250, the management logic 260, applications in the application store 280 and information in the embedded file system 290.
  • the operator may use the resulting interface to make changes that will be implemented upon the next startup of the PCD 100. Alternatively, the operator may use the resulting interface to make changes that are implemented during run time.
  • The embedded file system 290 includes a hierarchically arranged DVFS store 292.
  • the file system 290 may include a reserved section of its total file system capacity for the storage of information for the configuration and management of the various parameters 298 and performance scaling algorithms 297 used by the PCD 100.
  • the DVFS store 292 includes a core store 294, which includes a program store 296, which includes one or more DVFS programs. Each program is defined as a combination of a respective performance scaling algorithm and a set of parameters associated with the specific algorithm.
  • A particular member of a set of files may be located and identified by the path \startup\core0\algorithm\parameter set.
  • a program is identified by the algorithm in combination with the contents of information stored in the parameter set.
  • A conventional DVFS algorithm known as "classic" may be identified to manage performance scaling on core0 222 in accordance with the parameters sample rate, samples to increase, and samples to decrease as follows: \startup\core0\classic\SampleRate, with a value of 100, where the sample rate is in MHz; \startup\core0\classic\SamplesToIncrease, with a value of 2, where the samples to increase is an integer; and \startup\core0\classic\SamplesToDecrease, with a value of 1, where the samples to decrease is an integer.
  • the algorithm is defined by a periodic sampling of the CPU idle percentage and operates in accordance with a low threshold (% idle) and a high threshold (% idle). If a samples-to-increase threshold comparator indicates for two consecutive samples that performance should be increased, the DVFS algorithm increases performance in accordance with a predetermined clock level adjustment. Conversely, if a samples-to-decrease threshold comparator indicates for 1 consecutive sample that performance should be decreased, the DVFS algorithm decreases performance in accordance with the predetermined clock level (i.e., frequency) adjustment. As explained above, processor or core operating voltage may be changed together with changes in the clock frequency.
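  • A sketch of the "classic" algorithm as just described, using SamplesToIncrease = 2 and SamplesToDecrease = 1 from the example parameter set; the idle-percentage thresholds, the core count, and the driver hooks are assumptions rather than values from the publication.

```c
/* Periodically sample the CPU idle percentage; step the clock level (and, with
 * it, the operating voltage) up after two consecutive "busy" samples and down
 * after one "idle" sample, as described above.                                */
#define MAX_CORES             4
#define SAMPLES_TO_INCREASE   2   /* \startup\core0\classic\SamplesToIncrease  */
#define SAMPLES_TO_DECREASE   1   /* \startup\core0\classic\SamplesToDecrease  */
#define LOW_IDLE_THRESHOLD   20   /* % idle below which to speed up (assumed)  */
#define HIGH_IDLE_THRESHOLD  70   /* % idle above which to slow down (assumed) */

extern int  read_idle_percent(int core);           /* placeholder hooks        */
extern void step_clock_level(int core, int dir);   /* +1 faster, -1 slower     */

static int busy_streak[MAX_CORES];
static int idle_streak[MAX_CORES];

void classic_dvfs_sample(int core)
{
    int idle = read_idle_percent(core);

    if (idle < LOW_IDLE_THRESHOLD) {                 /* below low threshold     */
        idle_streak[core] = 0;
        if (++busy_streak[core] >= SAMPLES_TO_INCREASE) {
            step_clock_level(core, +1);
            busy_streak[core] = 0;
        }
    } else if (idle > HIGH_IDLE_THRESHOLD) {         /* above high threshold    */
        busy_streak[core] = 0;
        if (++idle_streak[core] >= SAMPLES_TO_DECREASE) {
            step_clock_level(core, -1);
            idle_streak[core] = 0;
        }
    } else {
        busy_streak[core] = idle_streak[core] = 0;
    }
}
```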
  • The DVFS store 292 may be arranged such that the search path starts from the most specific with respect to its application (i.e., the processor core, algorithm, and parameter value) and progresses to the least specific with respect to application.
  • Parameters are defined in the directories /core0, /coreAll and /default in association with the "classic" performance scaling algorithm.
  • The path \core0\classic\SampleRate applies only to the classic algorithm operating on core0. This most specific application will override all others.
  • The path \coreAll\classic\SampleRate applies to any processor core running the classic algorithm. This application is not as specific as the example path above, but is more specific than \default\classic\SampleRate, which applies to any processor core running the classic algorithm.
  • This default application is the least specific and is used only if no other suitable path exists in the DVFS store 292.
  • the first parameter found will be the one used.
  • the ⁇ default location will always have a valid parameter file.
  • FIG. 7C is a first table 267 listing exemplary frequency values for two different DVFS algorithms.
  • each core of the multi-core CPU 110 may be assigned specific maximum clock frequency values depending upon the current DVFS algorithm being executed.
  • Core 0 may be assigned a maximum clock frequency of 600 MHz
  • Core 1 may be assigned a maximum clock frequency of 650 MHz
  • the Nth Core may be assigned a maximum clock frequency of 720 MHz.
  • Core 0 may be assigned a maximum clock frequency of 610 MHz, while Core 1 is assigned a maximum clock frequency of 660 MHz, and the Nth core may be assigned a maximum clock frequency of 700 MHz. These limits on clock frequency may be selected by the thermal policy manager 101 depending upon the current thermal state of the PCD 100.
  • FIG. 7D is a second table 277 listing exemplary frequency and voltage pairs for two DVFS algorithms.
  • Core 0 may be assigned a maximum clock frequency of 600 MHz while its maximum voltage may be limited to 1.3 V.
  • Core 1 may be assigned a maximum clock frequency of 500 MHz and a corresponding maximum voltage of 2.0 V.
  • Core N may be assigned a maximum clock frequency of 550 MHz and a corresponding maximum voltage of 1.8 V.
  • Under a second DVFS algorithm, Core 0 may be assigned a maximum clock frequency of 550 MHz while the maximum voltage is assigned the value of 1.0 V.
  • Core 1 may be assigned a maximum clock frequency of 600 MHz and the corresponding maximum voltage of 1.5 V. And lastly, Core N may be assigned a maximum clock frequency of 550 MHz and a corresponding maximum voltage of 1.9 V.
  • the thermal policy manager 101 may select the various pairs of frequency and voltages enumerated in table 277 depending upon the current thermal state of the PCD 100.
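  • The frequency/voltage caps quoted for table 277 could be represented and selected as below. This is only a sketch: the split of the quoted rows into a first and second algorithm, the data layout, and the mapping from thermal state to cap set are assumptions, not something the publication specifies.

```c
/* Illustrative per-core frequency/voltage caps in the spirit of FIG. 7D
 * (table 277).  The values mirror the examples quoted above; which cap set the
 * thermal policy manager 101 selects would depend on the current thermal state. */
enum { CORE_0, CORE_1, CORE_N, NUM_CORES };

typedef struct {
    unsigned max_freq_mhz;
    float    max_voltage_v;
} core_cap_t;

static const core_cap_t dvfs_algo_first[NUM_CORES] = {
    [CORE_0] = { 600, 1.3f },
    [CORE_1] = { 500, 2.0f },
    [CORE_N] = { 550, 1.8f },
};

static const core_cap_t dvfs_algo_second[NUM_CORES] = {
    [CORE_0] = { 550, 1.0f },
    [CORE_1] = { 600, 1.5f },
    [CORE_N] = { 550, 1.9f },
};

/* Assumed mapping: more severe thermal states pick the more conservative set. */
static const core_cap_t *select_caps(int thermal_state)
{
    return (thermal_state >= 2) ? dvfs_algo_second : dvfs_algo_first;
}
```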
  • FIG. 8 is an exemplary state diagram 300 that illustrates various thermal policy states 305, 310, 315, and 320 that are tracked by the thermal policy manager 101.
  • the first policy state 305 may comprise a "normal" state in which the thermal policy manager 101 only monitors thermal sensors 157 in a routine or ordinary fashion.
  • the PCD 100 is usually not in any danger or risk of reaching critical temperatures that may cause failure of any of the hardware and/or software components.
  • the thermal sensors 157 may be detecting or tracking temperatures that are at 50°C or below.
  • other temperature ranges may be established for the first and normal state 305 without departing from the scope of the invention.
  • the second policy state 310 may comprise a "quality of service” or "QoS" state in which the thermal policy manager 101 may increase the frequency in which thermal sensors 157 are polled or in which the thermal sensors 157 send their temperature status reports to the thermal policy manager 101.
  • This exemplary second state 310 may be reached or entered into by the thermal policy manager 101 when a change of temperature has been detected in the first, normal state 305.
  • the threshold or magnitude of the change in temperature (delta T) which triggers this QoS state 310 may be adjusted or tailored according to a particular PCD 100.
  • a PCD 100 may be operating in the first normal state 305, depending upon the magnitude of the change in temperature that is detected by one or more thermal sensors, the PCD 100 may leave the first normal state 305 and enter into the second QoS state 310 as tracked by the thermal policy manager 101.
  • a PCD 100 may have a first maximum temperature reading from a given thermal sensor 157 of approximately 40°C. And a second reading from the same thermal sensor 157 may show a change in temperature of only 5°C which takes the maximum temperature being detected to 45°C.
  • Although the maximum temperature being detected may be below the established threshold of 50°C for the first, normal state 305, the change in temperature of 5°C may be significant enough for the thermal policy manager 101 to change the state to the second, QoS state 310.
  • In this second, QoS thermal state 310, the thermal policy manager 101 is designed to implement or request thermal mitigation techniques that may be barely perceivable by an operator and which may degrade the quality of service provided by the PCD 100 in only a minimal fashion.
  • the temperature range for this second, QoS thermal state 310 may comprise a range between about 50°C to about 80°C.
  • One of ordinary skill in the art will recognize that other temperature ranges may be established for the second QoS state 310 and are within the scope of the invention.
  • the second, QoS state 310 may be triggered based on the magnitude and/or location of the change in temperature and are not necessarily limited to the endpoints of a selected temperature range. Further details about this second, QoS thermal state 310 will be described below in connection with FIG. 9.
  • The third thermal state 315 may comprise a "severe" state in which the thermal policy manager 101 continues to monitor and/or receives interrupts from thermal sensors 157 while requesting and/or applying more aggressive thermal mitigation techniques relative to the second, QoS state 310 described above. This means that in this state the thermal policy manager 101 is less concerned about quality of service from the perspective of the operator. In this thermal state, the thermal policy manager 101 is more concerned about mitigating or reducing thermal load in order to decrease the temperature of the PCD 100. In this third thermal state 315, a PCD 100 may have degradations in performance that are readily perceived or observed by an operator. The third, severe thermal state 315 and its corresponding thermal mitigation techniques applied or triggered by the thermal policy manager 101 will be described in further detail below in connection with FIG. 9.
  • the temperature range for this third, severe thermal state 315 may comprise a range from about 80°C to about 100°C.
  • this third and severe thermal state 315 may be initiated based upon the change in temperature detected by one or more thermal sensors 157 and not necessarily limited to a temperature range established or mapped for this third thermal state 315. For example, as the arrows in this diagram illustrate, each thermal state may be initiated in sequence or they can be initiated out of sequence depending upon the magnitude of the change in temperature (delta T) that may be detected.
  • the PCD 100 may leave the first and normal thermal state 305 and enter into or initiate the third and severe thermal state 315 based on a change in temperature that is detected by one or more thermal sensors 157, and vice versa.
  • the PCD 100 may be in the second or QoS thermal state 310 and enter into or initiate the fourth or critical state 320 based on a change in temperature that is detected by one or more thermal sensors 157, and vice versa.
  • the thermal policy manager 101 is applying or triggering as many and as sizable thermal mitigation techniques as possible in order to avoid reaching one or more critical temperatures that may cause permanent damage to the electronics contained within the PCD 100.
  • This fourth and critical thermal state 320 may be similar to conventional techniques that are designed to eliminate functionality and operation of a PCD 100 in order to avoid critical temperatures.
  • the fourth thermal state 320 may comprise a "critical" state in which the thermal policy manager 101 applies or triggers the shutting down of non-essential hardware and/or software.
  • the temperature range for this fourth thermal state may include those of about 100°C and above.
  • the fourth and critical thermal state 320 will be described in further detail below in connection with FIG. 9.
  • the thermal policy management system is not limited to the four thermal states described above; additional or fewer thermal states may be provided without departing from the scope of the invention. That is, one of ordinary skill in the art recognizes that additional thermal states may improve functionality and operation of a particular PCD 100, while in other situations fewer thermal states may be preferred for a particular PCD 100 that has its own unique hardware and/or software.
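A minimal sketch of the state model described above is given below in C; it is not taken from the disclosure. The 50°C/80°C/100°C boundaries and the idea that a sharp temperature jump can escalate the state come from the text, while the per-state polling intervals, the 5°C jump threshold, and the sample readings are assumed values for illustration only.

```c
/*
 * Sketch of the four thermal policy states and their range / delta-T
 * transitions.  Polling intervals, the 5 C jump threshold and the sample
 * readings are illustrative assumptions.
 */
#include <stdio.h>

typedef enum { STATE_NORMAL, STATE_QOS, STATE_SEVERE, STATE_CRITICAL } thermal_state_t;

/* Assumed per-state polling interval in milliseconds (faster as severity grows). */
static const int poll_interval_ms[] = { 1000, 500, 100, 20 };

/* Map an absolute reading to a state using the ranges described above. */
static thermal_state_t state_from_temp(double temp_c)
{
    if (temp_c < 50.0)  return STATE_NORMAL;
    if (temp_c < 80.0)  return STATE_QOS;
    if (temp_c < 100.0) return STATE_SEVERE;
    return STATE_CRITICAL;
}

/* A sharp rise (delta T) may escalate the state even inside the current range. */
static thermal_state_t next_state(thermal_state_t cur, double prev_c, double now_c)
{
    thermal_state_t by_range = state_from_temp(now_c);
    if ((now_c - prev_c) >= 5.0 && by_range <= cur && cur < STATE_CRITICAL)
        return (thermal_state_t)(cur + 1);   /* escalate one state on a sharp jump */
    return by_range;                         /* otherwise follow the mapped range  */
}

int main(void)
{
    thermal_state_t s = STATE_NORMAL;
    double prev = 40.0;
    double readings[] = { 45.0, 47.0, 55.0, 85.0, 72.0 };
    for (unsigned i = 0; i < sizeof readings / sizeof readings[0]; i++) {
        s = next_state(s, prev, readings[i]);
        printf("T=%.1fC -> state %d, poll every %d ms\n",
               readings[i], (int)s, poll_interval_ms[s]);
        prev = readings[i];
    }
    return 0;
}
```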
  • FIG. 9 is a diagram illustrating exemplary thermal mitigation techniques that may be applied or ordered by the thermal policy manager 101 and are dependent upon a particular thermal state of a PCD 100. It should be appreciated that the thermal mitigation techniques described herein may be applied to manage thermal loads associated with any type of processing, but may be particularly useful in situations involving graphics processing due to inherent power demands, system requirements, and importance to the overall user experience of the PCD 100.
  • the first thermal state 305 may comprise a "normal" state in which the thermal policy manager 101 being executed by the CPU 110 and partially by the ADC controller 103 may monitor, poll, or receive one or more status reports on temperature from one or more thermal sensors 157.
  • a PCD 100 may not be in any danger or risk of reaching a critical temperature that may harm one or more software and/or hardware components within the PCD 100.
  • the thermal policy manager 101 is not applying or has not requested any initiation of thermal mitigation techniques such that the PCD 100 is operating at its fullest potential and highest performance without regard to thermal loading.
  • the temperature range for this first thermal state 305 may include those of 50°C and below.
  • the thermal policy manager 101 may reside in the ADC controller 103 while the main thermal policy manager 101 for all other states may reside or be executed by the CPU 110. In an alternate exemplary embodiment, the thermal policy manager 101 may reside only in the CPU 110.
  • the thermal policy manager 101 may begin more rapid monitoring, polling, and/or receiving of interrupts (relative to the first thermal state 305) from thermal sensors 157 regarding current temperature of the PCD 100.
  • the thermal policy manager 101 may initiate or request the monitor module 114 and/or operating system ("O/S") module 207 of FIG. 7A to start applying thermal mitigation techniques, but with the objective of maintaining high performance with little or no perceptible degradation in the quality of service as perceived by the operator of the PCD 100.
  • the thermal policy manager 101 may request the monitor 114 and/or the O/S module 207 to initiate thermal mitigation techniques such as, but not limited to, (1) load scaling and/or (2) load dynamic scaling and/or (3) spatial load shifting and/or (4) process load reallocation.
  • Load scaling may comprise adjusting or "scaling" the maximum clock frequency allowed by the DVFS algorithm, such as the values provided in the first table 267 of FIG. 7C. Such an adjustment may limit the maximum heat dissipation.
  • This thermal load mitigation technique may also involve adjusting the voltage to match the standard DVFS table used for a particular and unique PCD 100.
  • the thermal load mitigation technique of load dynamic scaling may comprise the scaling of one or all of the N application processor cores 222, 224, and 230.
  • This thermal load mitigation technique may comprise establishing the max clock frequency allowed for the DVFS algorithm of a particular core 222, 224, or 230.
  • the DVFS algorithm will use a table of voltage/frequency pairs, such as the second table 277 illustrated in FIG. 7D, to scale processing capability.
  • One such way includes limiting the number of millions of instructions per second ("MIPS") by limiting the max frequency allowed.
  • the thermal policy manager 101 is effectively limiting the power consumption of the core(s) 222, 224, and 230 and limiting their available processing capability (MIPS).
  • the thermal policy manager 101 may choose to limit N cores 222, 224, 230 together, or it can select and choose which cores 222, 224, 230 get scaled back while allowing other cores 222, 224, 230 to operate in an unconstrained manner.
  • the thermal policy manager 101, monitor module 114, and/or O/S module 207 may make their decisions on which cores 222, 224, 230 to control based on data received from thermal sensors 157, software application requirements, and/or best-effort prediction.
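The sketch below illustrates how such a per-core frequency cap might be applied against a table of DVFS operating points. The voltage/frequency pairs are placeholder values assumed for illustration; they are not the contents of the tables 267 or 277 referenced in the text.

```c
/*
 * Sketch of "load scaling" / "load dynamic scaling": clamping the maximum
 * frequency (and matching voltage) that the DVFS algorithm may select for a
 * given core.  Operating points below are assumed, not from the disclosure.
 */
#include <stddef.h>
#include <stdio.h>

struct vf_pair { unsigned freq_mhz; unsigned voltage_mv; };

/* Assumed example operating points, ordered from lowest to highest frequency. */
static const struct vf_pair dvfs_table[] = {
    { 384, 900 }, { 768, 1000 }, { 1024, 1100 }, { 1512, 1250 },
};

/* Return the highest table entry whose frequency does not exceed the cap. */
static const struct vf_pair *clamp_operating_point(unsigned max_freq_mhz)
{
    const struct vf_pair *best = &dvfs_table[0];
    for (size_t i = 0; i < sizeof dvfs_table / sizeof dvfs_table[0]; i++)
        if (dvfs_table[i].freq_mhz <= max_freq_mhz)
            best = &dvfs_table[i];
    return best;
}

int main(void)
{
    /* e.g. in the QoS state, scale core 0 back while leaving core 1 unconstrained */
    const struct vf_pair *core0 = clamp_operating_point(1024);
    const struct vf_pair *core1 = clamp_operating_point(~0u);
    printf("core0 capped at %u MHz / %u mV\n", core0->freq_mhz, core0->voltage_mv);
    printf("core1 runs at   %u MHz / %u mV\n", core1->freq_mhz, core1->voltage_mv);
    return 0;
}
```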
  • the temperature range for this second thermal state may include those of about 50°C to about 80°C.
  • the thermal load mitigation technique of spatial load shifting comprises the activation and deactivation of cores within a multi-core processor system. If N multiple cores exist, each core may be loaded up with work or its performance maximized using up to N-1 cores, and then, as a thermal sensor 157 indicates a heating problem, the location of an inactive core functioning as a cooling device may be shifted. Each core may effectively be cooled by letting it idle in a predetermined pattern or in a pattern dictated by thermal measurements. A 'hole' is effectively moved in MIPS around the cores to cool them in the course of several seconds. In this way, several GHz of processing power may be made available to a PCD 100 while still cooling the silicon die by moving the load around. Further details of spatial load shifting will be described below in connection with FIG. 13A.
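A minimal sketch of rotating such a 'hole' around a four-core processor follows; the core count, the example loads, and the rotation step are assumptions made only for illustration.

```c
/*
 * Sketch of spatial load shifting: an idle "hole" is rotated around the cores
 * so that each core periodically cools while the aggregate workload stays
 * available.  Core count, loads and rotation cadence are assumed values.
 */
#include <stdio.h>

#define NUM_CORES 4

/* Rotate the idle slot one position; the cooled core resumes the load
 * previously carried by its neighbour.  Returns the new idle core index. */
static int rotate_hole(double load[NUM_CORES], int idle_core)
{
    int next_idle = (idle_core + 1) % NUM_CORES;
    load[idle_core] = load[next_idle];  /* cooled core takes over the work */
    load[next_idle] = 0.0;              /* the next core becomes the hole  */
    return next_idle;
}

int main(void)
{
    double load[NUM_CORES] = { 0.0, 80.0, 80.0, 60.0 };  /* core 0 starts as the hole */
    int hole = 0;
    for (int step = 0; step < 4; step++) {               /* e.g. one step every few seconds */
        hole = rotate_hole(load, hole);
        printf("step %d: hole=core %d, loads = %.0f %.0f %.0f %.0f\n",
               step, hole, load[0], load[1], load[2], load[3]);
    }
    return 0;
}
```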
  • PCD 100 may have temperature sensors 157 in close proximity to individual cores or groups of cores. Based on temperature readings from sensors 157, drivers executed on one or more of the cores themselves may be leveraged to cause a process load reallocation from a "hot” core to a "cool,” or otherwise less utilized, core.
  • embodiments of various thermal mitigation techniques may be implemented in real-time, or near real-time, as the thermal policy manager module(s) 101 may be operable to react to temperature readings which fluctuate with processing loads.
  • predefined thermal steering scenarios 24 may not be required. That is, some embodiments may utilize algorithms that, based on real-time temperature inputs and workload data, can generate instructions for efficient reallocation or spatial shifting of processing load.
  • process loads may be reallocated within a given core.
  • process loads requiring high computational power such as, but not limited to, gaming applications having excessive graphical processing requirements, may normally be scheduled for processing at a sub-core level to benefit from the improved computational capacity of the sub-core.
  • An overloaded process queue at a sub-core may generate excessive thermal energy that could be detrimental to the CPU 110 or other components comprised within the PCD 100.
  • the thermal energy load may be mitigated by reallocating within the given core (as opposed to between cores) all or part of the process load from the high power density sub-processor block to the lower power density main processing block.
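A sketch of such an intra-core reallocation is shown below. The 80°C trigger mirrors the severe-state boundary given in the text, while the choice to move half of the sub-block's queue is purely an assumed example.

```c
/*
 * Sketch of intra-core process load reallocation: when the high power density
 * sub-processor block overheats, a fraction of its queue is moved to the lower
 * power density main processing block of the same core.
 */
#include <stdio.h>

struct core_blocks {
    double sub_load_pct;    /* load on the high power density sub-processor block */
    double main_load_pct;   /* load on the lower power density main block         */
};

static void reallocate_within_core(struct core_blocks *c, double sub_temp_c)
{
    if (sub_temp_c < 80.0)
        return;                                   /* no mitigation required        */
    double moved = c->sub_load_pct * 0.5;         /* assumed: shift half the queue */
    c->sub_load_pct  -= moved;
    c->main_load_pct += moved;
}

int main(void)
{
    struct core_blocks core228 = { .sub_load_pct = 80.0, .main_load_pct = 20.0 };
    reallocate_within_core(&core228, 82.0);
    printf("sub-block %.0f%%, main block %.0f%%\n",
           core228.sub_load_pct, core228.main_load_pct);
    return 0;
}
```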
  • the thermal policy manager 101 may start continuous monitoring, polling, or receiving interrupts from thermal sensors 157 so that temperature is sensed more frequently than in the second, lower thermal state 310.
  • the thermal policy manager 101 may apply or request that the monitor module 114 and/or O/S module 207 apply more aggressive thermal mitigation techniques and/or additional thermal mitigation techniques (relative to the second thermal state 310) with probable perceivable degradation of performance observed by an operator of the PCD 100.
  • the thermal policy manager 101 may cause reduction in power to one or more hardware devices like amplifiers, processors, etc.
  • the thermal policy manager 101 may also shift workloads among different hardware devices in a spatial manner in order to bring active devices off-line and to bring inactive devices on-line.
  • the thermal mitigation techniques of this third and severe thermal state 315 may be the same as those described above with respect to the second, quality of service thermal state 310. However, these same thermal mitigation techniques may be applied in a more aggressive manner. For example, in reallocation of process loads, the thermal policy manager 101 may request that a larger percentage of process loads are reallocated from the high power density sub-processor blocks to the main processor blocks of the various cores, as compared to the second thermal state 310.
  • the thermal policy manager 101 may request that active process loads be completely reallocated from the high power density sub-processor blocks to the main processor blocks of the various cores, effectively taking the high thermal energy generating sub-processor blocks offline. These process load reallocations may result in processing performance that is less than what is recommended for supporting a particular application program.
  • the thermal policy manager 101 may start shutting down or requesting the monitor 114 and/or O/S module 207 to start shutting down all nonessential hardware and/or software modules.
  • nonessential hardware and/or software modules may be different for each type of particular PCD 100.
  • all nonessential hardware and/or software modules may include all of those outside of an emergency 911 telephone call function and global positioning satellite ("GPS") functions.
  • the thermal policy manager 101 in this fourth, critical thermal state 320 may cause the shutdown of hardware and/or software modules that are outside of emergency 911 telephone calls and GPS functions.
  • the thermal policy manager 101 may shut down modules in sequence and/or in parallel depending upon the critical temperatures being monitored by the thermal sensors 157 and the change in temperature being observed by the thermal policy manager 101.
  • the temperature range for this fourth thermal state 320 may include those of about 100°C and above.
  • FIG. 10 is a diagram illustrating an exemplary graph 500 of temperature versus time and corresponding thermal policy states 305, 310, 315, and 320.
  • the thermal policy manager 101 may receive a first interrupt temperature reading of 40°C from one or more thermal sensors 157.
  • the thermal policy manager 101 may remain in the first or normal thermal state 305.
  • the thermal policy manager 101 may receive a second interrupt temperature reading of 50°C. Though 50°C may be within the selected temperature range for the first thermal state 305, if the change in temperature from the last temperature reading was significant, such as a large temperature change within a short period of time (like a 3°C change within five seconds), then such a change or jump in temperature may trigger the thermal policy manager 101 to leave the normal thermal state 305 and initiate the second, QoS thermal state 310.
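Such a rate-of-rise check might look like the sketch below; it simply restates the 3°C-within-five-seconds example from the text as an explicit predicate.

```c
/*
 * Sketch of a delta-T trigger: a jump of roughly 3 C within five seconds
 * escalates the policy state even if the absolute reading is still inside
 * the normal range.  The thresholds restate the example from the text.
 */
#include <stdbool.h>
#include <stdio.h>

static bool sharp_rise(double prev_c, double now_c,
                       double elapsed_s, double jump_c, double window_s)
{
    return (now_c - prev_c) >= jump_c && elapsed_s <= window_s;
}

int main(void)
{
    /* 47 C -> 50 C in 4 s: still at the edge of the normal range, but escalates */
    printf("escalate: %s\n", sharp_rise(47.0, 50.0, 4.0, 3.0, 5.0) ? "yes" : "no");
    return 0;
}
```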
  • the thermal policy manager 101 may have requested or activated one or more thermal mitigation techniques in order to lower the temperature of the PCD 100.
  • the thermal policy manager 101 may change the thermal state of the PCD 100 from the second state 310 to the first and normal state 305.
  • the thermal policy manager 101 may observe that the temperature trend is moving in an upward fashion or, in other words, the temperature line 505 may have a positive slope or change in delta T.
  • the thermal policy manager 101 may change the thermal state of the PCD 100 in view of this data from the first thermal state 305 to the second, QoS thermal state 310.
  • the thermal policy manager 101 may request or it may activate one or more thermal mitigation techniques that should not significantly impact the quality of service provided by the PCD 100.
  • the second thermal state 310 may include a temperature range of about 50°C to about 80°C.
  • the thermal policy manager 101 may initiate a change of thermal state from the second, QoS thermal state 310 to the third and severe thermal state 315.
  • the temperature range for this third thermal state 315 may include a range of about 80°C to about 100°C.
  • the thermal policy manager 101 may be requesting or activating a plurality of thermal mitigation techniques that may impact the quality of service and performance of the PCD 100.
  • the segment of the temperature line 505 between the fifth point 515 and the sixth point 518 reflects that the third and severe thermal state 315 has been unsuccessful in mitigating the temperature rise within the PCD 100. Therefore, at the sixth point 518, which may have a magnitude of approximately 100°C, the thermal policy manager 101 may enter into the fourth and critical state 320. In this fourth and critical state 320, the thermal policy manager 101 may activate or request that certain hardware and/or software components be shut down in order to alleviate the current thermal load. As noted previously, the thermal policy manager 101 may cause any hardware and/or software component outside of emergency 911 call functions and GPS functions to be shut down while in this fourth thermal state 320.
  • the segment of the line 505 between the sixth point 518 and seventh point 521 reflects that the critical thermal state 320 and severe thermal state 315 were successful in lowering the temperature of the PCD 100.
  • one or more thermal states may be jumped or skipped depending upon the temperature measured by the thermal sensors 157 and observed by the thermal policy manager 101.
  • FIGs. 11 A & 1 IB are logical flowcharts illustrating a method 600 for managing one or more thermal policies of a PCD 100.
  • Method 600A of FIG. 11A starts with first block 605 in which the thermal policy manager 101 may monitor temperature with internal and external thermal sensors 157 while in a first thermal state 305.
  • This first block 605 generally corresponds with the first thermal state 305 illustrated in FIGs. 8 & 9.
  • the thermal policy manager 101 may monitor, actively poll, and/or receive interrupts from one or more thermal sensors 157. In this particular thermal state, the thermal policy manager 101 does not apply any thermal mitigation techniques.
  • the PCD 100 may perform at its optimal level without regard to any thermal loading conditions in this first thermal state.
  • the thermal policy manager 101 may determine if a temperature change (delta T) has been detected by one or more thermal sensors 157. If the inquiry to decision block 610 is negative, then the "NO" branch is followed back to block 605. If the inquiry to decision block 610 is positive, then the "YES” branch is followed to block 615 in which the thermal policy manager 101 may increase the frequency of the monitoring of the thermal sensors 157. In block 615, the thermal policy manager may actively poll the thermal sensors 157 more frequently or it may request the thermal sensors 157 to send more frequent interrupts that provide temperature data. This increased monitoring of thermal sensors 157 may occur in the first or normal state 305 and it may also occur in the second or quality of service thermal state 310.
  • the thermal policy manager 101 may determine if the next thermal state has been reached or achieved by the PCD 100. In this decision block 620, the thermal policy manager 101 may be determining if the temperature range assigned to the second thermal state 310 has been achieved. Alternatively, the thermal policy manager in this decision block 620 may be determining if a significant change in temperature (delta T) has occurred since a last reading.
  • Routine or subroutine 625 may comprise a second thermal state 310 also referred to as the QoS state 310 in which thermal policy manager 101 may apply or request one or more thermal mitigation techniques described above in connection with FIG. 9.
  • the thermal policy manager 101 may request the monitor 114 and/or the O/S module 207 to initiate thermal mitigation techniques such as, but not limited to, (1) load scaling and/or (2) load dynamic scaling and/or (3) spatial load shifting and/or (4) process load reallocation as described above.
  • the thermal policy manager 101 may determine if the PCD 100 has entered into the third or severe thermal state 315 by determining if a significant change in temperature (delta T) has occurred.
  • the thermal policy manager 101 has determined that the PCD 100 has entered into the third or severe thermal state.
  • the thermal policy manager 101 may then activate or request that one or more thermal mitigation techniques be applied.
  • the thermal policy manager 101 in this third or severe thermal state 315 may start continuous monitoring, polling, or receiving interrupts from thermal sensors 157 so that temperature is sensed more frequently than in the second, lower thermal state 310.
  • the thermal policy manager 101 may apply or request that the monitor module 114 and/or O/S module 207 apply more aggressive thermal mitigation techniques and/or additional thermal mitigation techniques (relative to the second thermal state 310) with probable perceivable degradation of performance observed by an operator of the PCD 100.
  • the thermal policy manager 101 may cause reduction in power to one or more hardware devices like amplifiers, processors, etc.
  • the thermal policy manager 101 may also shift workloads among different hardware devices in a spatial manner in order to bring active devices off-line and to bring inactive devices on-line. Further, the thermal policy manager may increase the percentage of process loads reallocated from a high performance sub-processor block to the main processor blocks.
  • the thermal mitigation techniques of this third and severe thermal state 315 may be the same as those described above with respect to the second, quality of service thermal state 310. As explained above, however, these same thermal mitigation techniques may be applied in a more aggressive manner.
  • the thermal policy manager 101 may determine whether the one or more thermal mitigation techniques applied in subroutine 640 were successful in preventing escalation of temperature for the PCD 100. If the inquiry to decision block 645 is negative, then the "NO" branch is followed to step 655 of FIG. 11B. If the inquiry to decision block 645 is positive, then the "YES" branch is followed to step 650, in which the thermal policy manager 101 determines the current thermal state of the PCD 100 based on temperature readings provided by the one or more thermal sensors 157.
  • FIG. 11B is a continuation flowchart relative to the flowchart illustrated in FIG. 11A.
  • the method 600B of FIG. 11B starts with decision block 655 in which the thermal policy manager 101 may determine if the PCD 100 has entered into the fourth or critical thermal state 320 based on the temperature being detected by one or more thermal sensors 157. If the inquiry to decision block 655 is negative, then the "NO" branch is followed to step 660 in which the thermal policy manager 101 returns the PCD 100 to the third or severe thermal state 315 and the process returns to block 635 of FIG. 11A.
  • the thermal policy manager 101 activates or requests that one or more critical thermal mitigation techniques be implemented.
  • the thermal policy manager 101 in this fourth, critical thermal state 320 may cause the shutdown of hardware and/or software modules that are outside of emergency 911 telephone calls and GPS functions.
  • the thermal policy manager 101 may shut down modules in sequence and/or in parallel depending upon the critical temperatures being monitored by the thermal sensors 157 and the change in temperature being observed by the thermal policy manager 101.
  • In decision block 670, the thermal policy manager 101 may determine whether the thermal mitigation techniques applied in routine or submethod 665 were successful in preventing any escalation of temperature of the PCD 100 as detected by the thermal sensors 157. If the inquiry to decision block 670 is negative, then the "NO" branch is followed back to routine or submethod 665.
  • In step 675, the thermal policy manager 101 determines the current thermal state of the PCD 100 based on temperature readings supplied by one or more thermal sensors 157. Once the temperature readings are assessed by the thermal policy manager 101, the thermal policy manager 101 initiates the thermal state corresponding to the temperature ranges detected by the thermal sensors 157.
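Structurally, method 600 can be summarized as the loop sketched below: read the sensors, note any change in temperature, raise the polling rate, and dispatch whichever mitigation routine (625, 640, or 665) matches the current state. The helper functions are placeholders standing in for the flowchart blocks; the stubbed sensor value and the iteration count are assumptions for illustration only.

```c
/*
 * Structural sketch of method 600 (FIGs. 11A & 11B).  The stubs stand in for
 * the flowchart blocks and do not implement real hardware access.
 */
#include <stdio.h>

typedef enum { NORMAL, QOS, SEVERE, CRITICAL } state_t;

static double read_max_sensor_temp(void) { return 55.0; }     /* block 605 (stub)   */
static void   apply_qos_mitigation(void)      { puts("routine 625"); }
static void   apply_severe_mitigation(void)   { puts("routine 640"); }
static void   apply_critical_mitigation(void) { puts("routine 665"); }

static state_t state_for(double t)
{
    return t < 50.0 ? NORMAL : t < 80.0 ? QOS : t < 100.0 ? SEVERE : CRITICAL;
}

int main(void)
{
    double prev = 48.0;
    for (int iter = 0; iter < 3; iter++) {                /* stand-in for the monitor loop */
        double t = read_max_sensor_temp();
        if (t != prev)                                    /* decision block 610            */
            puts("delta T detected: increase polling frequency (block 615)");
        switch (state_for(t)) {                           /* decision blocks 620/635/655   */
        case NORMAL:   break;                             /* no mitigation in state 305    */
        case QOS:      apply_qos_mitigation();      break;
        case SEVERE:   apply_severe_mitigation();   break;
        case CRITICAL: apply_critical_mitigation(); break;
        }
        prev = t;                                         /* blocks 650/675: reassess      */
    }
    return 0;
}
```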
  • FIG. 12 is a logical flowchart illustrating sub-method or subroutines 625, 640, and 665 for applying process load reallocation thermal mitigation techniques.
  • Block 705 is the first step in the submethod or subroutine for applying process load reallocation thermal mitigation techniques.
  • the thermal policy manager 101 may determine the current thermal state based on temperature readings provided by thermal sensors 157 most proximate to the various CPU and/or GPU cores. Once the current thermal state is determined by the thermal policy manager 101, in block 710 the thermal policy manager 101 may then review the current process load allocations for the various cores associated with the temperature readings.
  • the thermal policy manager 101 may review the current workloads of one or more available, or otherwise underutilized, hardware and/or software modules.
  • the thermal policy manager 101 may reallocate or issue commands to reallocate the current workloads among the various cores, in order to reduce workload or to shift the workload.
  • the proportion of processing load reallocation, the particular portion of process load which is reallocated, and the processing location to which the load is reallocated may be determined according to the current thermal state determined by the thermal policy manager 101.
  • thermal energy generation can be mitigated.
  • the thermal policy manager 101 may initiate or request the monitor module 114 and/or operating system ("O/S") module 207 of FIG. 7A to start applying thermal mitigation techniques, but with the objective of maintaining high performance with little or no perceptible degradation in the quality of service as perceived by the operator of the PCD 100.
  • the thermal policy manager 101 may request the monitor 114 and/or the O/S module 207 to initiate thermal mitigation techniques such as, but not limited to, (1) load scaling and/or (2) load dynamic scaling and/or (3) spatial load shifting and/or (4) process load reallocation as described above.
  • the thermal policy manager 101 may start continuous monitoring, polling, or receiving interrupts from thermal sensors 157 so that temperature is sensed more frequently than in the second, lower thermal state 310.
  • the thermal policy manager 101 may apply or request that the monitor module 114 and/or O/S module 207 apply more aggressive thermal mitigation techniques and/or additional thermal mitigation techniques (relative to the second thermal state 310) with probable perceivable degradation of performance observed by an operator of the PCD 100.
  • the thermal policy manager 101 may cause a reduction in power to one or more hardware devices like amplifiers, processors, etc., or complete process load reallocation from high performance sub-processor blocks to lower power density main processor blocks.
  • the thermal policy manager 101 may also shift workloads among different hardware devices in a spatial manner, to bring active devices off-line and to bring inactive devices on-line.
  • the thermal mitigation techniques of this third and severe thermal state 315 may be the same as those described above with respect to the second, quality of service thermal state 310. However, these same thermal mitigation techniques may be applied in a more aggressive manner, as described above.
  • this thermal state 320 may be similar to conventional techniques that are designed to eliminate functionality and operation of a PCD 100 in order to avoid critical temperatures.
  • the fourth thermal state 320 may comprise a "critical" state in which the thermal policy manager 101 applies or triggers the shutting down of non-essential hardware and/or software.
  • the temperature range for this fourth thermal state may include those of about 100°C and above.
  • the submethod 625, 640, or 665 then returns to an appropriate step in the thermal management method 600 depending upon the current thermal state of the PCD 100.
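One way to parameterize blocks 705-720 is to tie the fraction of load moved off the hot block to the current thermal state, as sketched below. The specific fractions are assumptions chosen for illustration; the text requires only that the severe and critical states reallocate more aggressively than the QoS state.

```c
/*
 * Sketch of submethod 625/640/665 (FIG. 12): the fraction of the process load
 * moved off the hot block grows with the severity of the current state.
 */
#include <stdio.h>

typedef enum { NORMAL, QOS, SEVERE, CRITICAL } state_t;

/* Assumed fraction of the hot block's load to move for each state. */
static double reallocation_fraction(state_t s)
{
    switch (s) {
    case NORMAL:   return 0.00;   /* no mitigation                         */
    case QOS:      return 0.25;   /* barely perceivable to the operator    */
    case SEVERE:   return 0.75;   /* perceivable degradation accepted      */
    case CRITICAL: return 1.00;   /* take the hot sub-processor offline    */
    }
    return 0.0;
}

int main(void)
{
    double hot_block_load = 80.0, cool_block_load = 20.0;
    state_t s = SEVERE;                                        /* block 705: current state */
    double moved = hot_block_load * reallocation_fraction(s);  /* blocks 710-720           */
    hot_block_load  -= moved;
    cool_block_load += moved;
    printf("after reallocation: hot %.0f%%, cool %.0f%%\n", hot_block_load, cool_block_load);
    return 0;
}
```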
  • FIG. 13A is a schematic 800A for a four-core multi-core processor 110 and different process loads that may be reallocated within the multi-core processor 110.
  • the multi-core processor 110 may be a graphics processor 110 for supporting graphical content projected on the display 132 or a central processor 110 for execution of various applications.
  • the four-core multi-core processor 110 has a zeroth core 222, a first core 224, a second core 226, and a third core 228.
  • the first process load scenario for the multi-core processor 110 is demonstrated by multi-core processor 110A in which the zeroth core 222 has a process workload of 70% (out of a 100% full work capacity/utilization for a particular core), while the first core 224 has a process workload of 30%, the second core 226 has a process workload of 50%, and the third core 228 has a process workload of 10%.
  • a process reallocation thermal load mitigation technique as illustrated in this FIG. 13A may be implemented. According to this process reallocation thermal load mitigation technique, the thermal policy manager 101, the monitor module 114, and/or the O/S module 207 may shift the process workload of one core to one or more other cores in a multi-core processor 110.
  • the process workload of the zeroth core 222 may be shifted such that additional work is performed by the remaining three other cores of the multi-core processor 110.
  • Multi-core processor 110B illustrates such a shift in that 20% of the process workload for the zeroth core 222 and 40% of the process workload for the second core 226 were shifted to the remaining two cores, such that the process workload experienced by the zeroth core 222 was reduced to 50% while the process workload experienced by the second core 226 was reduced to 10%.
  • the process workload of the first core 224 was increased to 70% while the process workload of the third core 228 was increased to 30%.
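The shift from processor 110A to processor 110B can be restated as the small worked example below; the move() helper is a hypothetical stand-in for whatever scheduler interface actually migrates the work.

```c
/*
 * Worked restatement of the FIG. 13A shift: 20% of core 0's load and 40% of
 * core 2's load are moved onto cores 1 and 3, reproducing the 50/70/10/30
 * allocation described above.
 */
#include <stdio.h>

static void move(double load[], int from, int to, double amount)
{
    load[from] -= amount;
    load[to]   += amount;
}

int main(void)
{
    double load[4] = { 70.0, 30.0, 50.0, 10.0 };   /* processor 110A   */
    move(load, 0, 1, 20.0);                        /* zeroth -> first  */
    move(load, 2, 3, 20.0);                        /* second -> third  */
    move(load, 2, 1, 20.0);                        /* second -> first  */
    printf("processor 110B: %.0f%% %.0f%% %.0f%% %.0f%%\n",
           load[0], load[1], load[2], load[3]);    /* 50 70 10 30      */
    return 0;
}
```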
  • the multi-core processors 110C-110D provide a demonstration of an exemplary shift of a "hole" in which one or more cores may effectively be cooled by letting them idle in a predetermined pattern or in a pattern dictated by thermal measurements. A 'hole', or core that is not being utilized, is effectively moved in MIPS around a group of cores to cool surrounding cores in the course of several seconds.
  • the zeroth core 222 and the first core 224 may have exemplary workloads of 80% while the second core 226 and the third core 228 have no loads whatsoever.
  • the thermal policy manager 101 may apply or request that a process reallocation thermal load mitigation technique be applied in which all of the workload of the two active cores 222, 224 be shifted to the two inactive cores 226, 228.
  • the fourth processor 110D demonstrates such a shift in which the zeroth core 222 and first core 224 no longer have any workloads while the second core 226 and third core 228 have assumed the previous workload which was managed by the zeroth core 222 and first core 224.
  • the multi-core processors 110E-110F provide a demonstration of the exemplary FIG. 12 process load reallocation thermal mitigation technique.
  • the FIG. 12 process load reallocation thermal mitigation technique is applied within a given core 228 such that a hotspot 48A within the core 228 may be effectively distributed over an increased area to form hotspot 48B.
  • hotspot 48A which has a high rate of energy dissipation per unit area, may be transformed into hotspot 48B which has a lower rate of energy dissipation per unit area.
  • the energy dissipation per unit area (and, thus, the temperature per unit area) may be lower for hotspot 48B because the processing area used to process the reallocated task has a lower power density per unit area than the high power density sub-processor. Additionally, the energy dissipation per unit area may also be lower for hotspot 48B than hotspot 48A because the reallocated processing task takes longer to complete, thus necessitating that less energy be dissipated per unit of area over a given unit of time.
  • thermal energy generation associated with a process load may be mitigated by reallocation of the process load.
  • An embodiment that includes a CPU 110E, 110F having a core 228 with a main processing block 228B and a higher performing sub-processor block 228A may have a main processing block 228B that represents three-fourths of the CPU 110E, 110F area and a sub-processor block 228A that represents the remaining quarter of the CPU 110E, 110F area.
  • the main processor block 228B may have an associated power density ("PD") that dissipates one-half of the total power of the overall CPU 110E, 110F, while the sub-processor block 228A, having increased computational power relative to the main processor, also has an associated power density that dissipates one-half of the total power.
  • For purposes of illustration, suppose that sub-processor block 228A is processing 80% of a given process load, such as, for example, a gaming application, while main processor block 228B is processing the modest 20% remainder of the process load.
  • the increased computational power associated with sub-processor block 228A may establish an allocation bias for high computational applications from the scheduler 207, thus explaining the 80% process load burden being allocated to sub-processor block 228A. That is, because sub-processor block 228A is high powered, the default action from the scheduler 207 may be to allocate any application requiring high computational power to sub-processor 228A. However, excess or prolonged processing demands on sub-processor block 228A may generate excess thermal energy, as represented in the illustration by hotspot 48A. For purposes of illustration, hotspot 48A may be on the order of 80 °C, a temperature perhaps associated with the threshold to severe state 315.
  • sensors 157 placed near CPU 110E or even, more specifically, near processor core 228 may read hotspot 48A and subsequently trigger thermal policy manager module 101 to initiate a thermal mitigation technique including process load reallocation.
  • process load reallocation from a high power density sub-processor 228A to a lower power density main processor 228B will serve to lower the aggregate thermal dissipation across the core.
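Using the area and power split described above (the sub-processor block occupying one quarter of the area and the main block three quarters, each dissipating half of the total power P over the total area A), the power densities work out as shown below; this simply restates the bracketed calculation that appears later in the description.

```latex
\[
PD_{\mathrm{sub}} = \frac{P/2}{A/4} = 2\,\frac{P}{A},
\qquad
PD_{\mathrm{main}} = \frac{P/2}{3A/4} = \frac{2}{3}\,\frac{P}{A},
\qquad
\frac{PD_{\mathrm{sub}}}{PD_{\mathrm{main}}} = 3 .
\]
```

Because the sub-block's power density is roughly three times that of the main block, moving a given portion of the load to the main block spreads its dissipation over about three times the area.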
  • the thermal policy manager module 101 when triggered by temperature readings of various cores or areas within cores, may direct the O/S scheduler to assign new processing loads, or reallocate existing processing loads, based on a thermal bias factor associated with core temperatures.
  • a thermal bias factor may be assigned to the various processing cores or core sub-areas such that processing load burdens are allocated, or reallocated in a manner that manages thermal energy generation without overly sacrificing user experience or device performance.
  • a bias factor may be included in some embodiments that serves to drive processing burdens to the higher power density sub-cores.
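One way such bias factors might enter an allocation decision is sketched below: each candidate core or sub-area gets a score combining its latest sensor reading with a performance bias, and new work goes to the best score. The weighting formula and the numeric values are assumptions for illustration only.

```c
/*
 * Sketch of a thermal-bias-factor allocation policy: hot cores are penalised,
 * while a performance bias may still favour the high power density sub-cores.
 */
#include <stdio.h>

struct core_info {
    const char *name;
    double temp_c;       /* latest reading from the nearby sensor 157 */
    double perf_bias;    /* >1.0 favours the faster, denser sub-cores */
};

/* Lower score is better: penalise temperature, reward performance bias. */
static double score(const struct core_info *c)
{
    return c->temp_c / c->perf_bias;
}

int main(void)
{
    struct core_info cores[] = {
        { "sub-block 228A",  82.0, 1.5 },
        { "main block 228B", 48.0, 1.0 },
    };
    const struct core_info *best = &cores[0];
    for (unsigned i = 1; i < sizeof cores / sizeof cores[0]; i++)
        if (score(&cores[i]) < score(best))
            best = &cores[i];
    printf("allocate next process load to %s\n", best->name);
    return 0;
}
```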
  • After such a reallocation, core 228 of CPU 110F may have a hotspot 48B that is spread over a larger, lower power density area. For purposes of illustration, hotspot 48B may be on the order of 50 °C, a temperature perhaps associated with the threshold to normal state 305.
  • thermal load steering parameter(s) to reallocate processing loads from one component to another, such as, for example, from a sub-processor block of core 228 to a main processor block of core 228, may realize the benefit of lower temperatures associated with thermal energy dissipation over a larger area for what may be a relatively minor tradeoff of processing performance.
  • the main processor blocks 228B may process the load more slowly, thus translating to a lower QoS, but dissipate the thermal energy associated with a given workload over a larger area and longer time compared to the sub-processors 228A, thereby possibly avoiding critical temperatures in PCD 100.
  • FIG. 14 illustrates an exemplary floor plan 1400 of an application specific integrated circuit ("ASIC") 102.
  • GPU bank 135 and CPU bank 110 represent the primary components generating thermal energy on ASIC 102.
  • Power management integrated circuits (“PMICs”) 182 do not reside on ASIC 102, but are represented as being in near proximity 1405 to CPU bank 110. For example, due to limited physical space within a PCD 100, PMICs 182 may reside immediately behind and adjacent to ASIC 102. As such, one of ordinary skill in the art will recognize that thermal energy dissipated from a PMIC 182, or other heat generating component, may adversely affect temperature readings taken from sensors 157 on any of cores 222, 224, 226, 228 within CPU 110.
  • PMICs 182, as well as other components residing within PCD 100, may be sources of thermal energy that bias the temperature readings taken from sensors 157 near the processing cores. An advantage of thermal mitigation algorithms that can be leveraged in real-time, or near real-time, is that temperature bias in processing components which may result from adjacent components within PCD 100, such as the exemplary PMICs 182, can be accommodated without custom configurations or pre-generated thermal load steering scenarios and parameters. That is, processing loads can be allocated, or reallocated, in real-time based on actual temperature readings.
  • the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted as one or more instructions or code on a computer-readable medium.
  • Computer-readable media include both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another.
  • a storage media may be any available media that may be accessed by a computer.
  • such computer-readable media may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to carry or store desired program code in the form of instructions or data structures and that may be accessed by a computer.
  • any connection is properly termed a computer-readable medium.
  • the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (“DSL"), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium.
  • Disk and disc, as used herein, include compact disc ("CD"), laser disc, optical disc, digital versatile disc ("DVD"), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Power Sources (AREA)
  • Microcomputers (AREA)
  • Debugging And Monitoring (AREA)

Abstract

Methods and systems for leveraging temperature sensors in a portable computing device ("PCD") are disclosed. The sensors may be placed within the PCD near known thermal energy producing components such as a central processing unit ("CPU") core, graphical processing unit ("GPU") core, power management integrated circuit ("PMIC"), power amplifier, etc. The signals generated by the sensors may be monitored and used to trigger drivers running on the processing units. The drivers are operable to cause the reallocation of processing loads associated with a given component's generation of thermal energy, as measured by the sensors. In some embodiments, the processing load reallocation is mapped according to parameters associated with pre-identified thermal load scenarios. In other embodiments, the reallocation occurs in real time, or near real time, according to thermal management solutions generated by a thermal management algorithm that may consider CPU and/or GPU performance specifications along with monitored sensor data.

Description

METHOD AND SYSTEM FOR THERMAL LOAD MANAGEMENT
IN A PORTABLE COMPUTING DEVICE
PRIORITY AND RELATED APPLICATIONS
This patent application claims priority under 35 U.S.C. § 119(e) to U.S.
Provisional Patent Application Serial No. 61/478,175 filed on April 22, 2011, entitled, "METHOD AND SYSTEM FOR THERMAL LOAD MANAGEMENT IN A PORTABLE COMPUTING DEVICE," the entire contents of which are hereby incorporated by reference.
DESCRIPTION OF THE RELATED ART
[0002] Portable computing devices (PCDs) are becoming necessities for people on personal and professional levels. These devices may include cellular telephones, portable digital assistants (PDAs), portable game consoles, palmtop computers, and other portable electronic devices.
[0003] One unique aspect of PCDs is that they typically do not have active cooling devices, like fans, which are often found in larger computing devices such as laptop and desktop computers. Instead of using fans, PCDs may rely on the spatial arrangement of electronic packaging so that two or more active and heat producing components are not positioned in close proximity to one another. When two or more heat producing components are suitably spaced from one another within a PCD, then heat generated from the operation of each component may not negatively impact the operation of the other. Moreover, when a heat producing component within a PCD is physically isolated from other components within the device, the heat generated from the operation of the heat producing component may not negatively impact other surrounding electronics. Many PCDs may also rely on passive cooling devices, such as heat sinks, to manage thermal energy among the electronic components which collectively form a respective PCD.
[0004] The reality is that PCDs are typically limited in size and, therefore, room for components within a PCD often comes at a premium. As such, there just typically isn't enough space within a PCD for engineers and designers to mitigate thermal degradation or failure through the leveraging of spatial arrangements or placement of passive cooling components.
[0005] Currently, when a PCD approaches a critical temperature, the operating system is designed to cool the PCD by simply shutting down most of the electronic components within the PCD which are generating the excessive thermal energy. While shutting down electronics may be an effective measure for avoiding the generation of excessive thermal energy within a PCD, such drastic measures inevitably impact performance of a PCD and, in some cases, may even render a PCD functionally inoperable for a period of time.
[0006] Accordingly, what is needed in the art is a method and system for thermal load management in a PCD that will promote cooling of components within the PCD without over-impacting its performance and functionality.
SUMMARY OF THE DISCLOSURE
[0007] Various embodiments of methods and systems for controlling and/or managing thermal energy generation on a portable computing device are disclosed. Because temperature readings may correlate to a process load within a thermal energy generating component, one such method involves placing a temperature sensor proximate to a thermal energy generating component of a chip in a portable computing device and then monitoring, at a first rate, temperature readings generated by the temperature sensor. Based on the detection of a first monitored temperature reading which may indicate that a processing area within the component, such as a high power density sub-processor area, has exceeded a temperature threshold, the method reallocates a portion of the process load running on the first processing area of the component to a second processing area of the component. Advantageously, because a processing workload has been spread across a larger processing area giving it a lower power density, reallocation of the process load portion serves to lower the amount of energy generated in any unit area of the component over a unit of time. Although user experience may suffer due to reduced quality of service ("QoS") associated with the lower power density second processing area, critically high temperatures concentrated in high power density processing areas may be avoided.
[0008] Exemplary methods may further comprise steps for subsequent reallocation of the process load from the second processing area to the first processing area when a second monitored temperature reading indicates that the component has cooled.
Advantageously, by making the second reallocation of process load after indication that the component has cooled, whether such load represents the process load that was initially reallocated from the first processing area or new processing loads queued for initial allocation, the QoS associated with the portable computing device can be returned to preferred levels.
Exemplary embodiments leverage temperature sensors strategically placed within a PCD near known thermal energy producing components such as, but not limited to, central processing unit ("CPU") cores, graphical processing unit ("GPU") cores, power management integrated circuits ("PMIC" or "PMICs"), power amplifiers, etc. Temperature signals generated by the sensors may be monitored and used to trigger drivers running on the processing units to cause the reallocation of processing loads correlating with a given component's excessive generation of thermal energy. In some embodiments, the processing load reallocation is mapped according to parameters associated with pre-identified thermal load scenarios. In other embodiments, the processing load reallocation occurs in real time, or near real time, according to thermal management solutions generated by a thermal management algorithm that may consider CPU and/or GPU performance specifications along with real time temperature sensor data.
BRIEF DESCRIPTION OF THE DRAWINGS
In the Figures, like reference numerals refer to like parts throughout the various views unless otherwise indicated. For reference numerals with letter character designations such as "102A" or "102B", the letter character designations may differentiate two like parts or elements present in the same Figure. Letter character designations for reference numerals may be omitted when it is intended that a reference numeral encompass all parts having the same reference numeral in all Figures.
FIG. 1 is a functional block diagram illustrating an embodiment of a computer system for simulating thermal load distributions in a portable computing device
("PCD") and generating data for enabling the PCD to control the distribution of the thermal load;
FIG. 2 is a logical flowchart illustrating an embodiment of a method for generating the thermal load steering table of FIG. 1 for use by the PCD to control the distribution of thermal load;
FIG. 3 is a data diagram illustrating an embodiment of the thermal load steering table of FIG. 1;
FIG. 4A is an overhead schematic diagram of the spatial arrangement of an exemplary integrated circuit illustrating a thermal load distribution under a simulated workload;
FIG. 4B illustrates the integrated circuit of FIG. 4A in which the thermal load distribution is distributed to a location closer to a thermal sensor according to the thermal load steering parameters in the thermal load steering table of FIG. 3;
FIG. 5 is a logical flowchart illustrating an embodiment of a method for controlling thermal load distribution in the PCD of FIG. 1;
FIG. 6 is a functional block diagram illustrating an exemplary embodiment of the PCD of FIG. 1;
FIG. 7A is a functional block diagram illustrating an exemplary spatial arrangement of hardware for the chip illustrated in FIG. 6;
FIG. 7B is a schematic diagram illustrating an exemplary software architecture of the PCD of FIG. 6 for supporting dynamic voltage and frequency scaling ("DVFS") algorithms;
FIG. 7C is a first table listing exemplary frequency values for two DVFS algorithms;
FIG. 7D is a second table listing exemplary frequency and voltage pairs for two DVFS algorithms;
FIG. 8 is an exemplary state diagram that illustrates various thermal policy states that may be managed by the thermal policy manager in the PCD of FIG. 1;
FIG. 9 is a diagram illustrating exemplary thermal mitigation techniques that may be applied or ordered by the thermal policy manager;
FIG. 10 is a diagram illustrating an exemplary graph of temperature versus time and corresponding thermal policy states;
FIGs. 11 A & 1 IB are logical flowcharts illustrating a method for managing one or more thermal policies;
FIG. 12 is a logical flowchart illustrating a sub-method or subroutine for applying process load reallocation thermal mitigation techniques;
FIG. 13A is a schematic diagram for a four-core multi-core processor and different workloads that may be spatially managed with the multi-core processor;
FIG. 13B is a schematic diagram for a four-core multi-core processor and thermal energy dissipation hotspots that may be managed from process load reallocation algorithms with the multi-core processor; and
FIG. 14 is a functional block diagram illustrating an exemplary spatial arrangement of hardware for the chip illustrated in FIG. 6 and exemplary components external to the chip illustrated in FIG. 6.
DETAILED DESCRIPTION
The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as exclusive, preferred or advantageous over other aspects.
In this description, the term "application" may also include files having executable content, such as: object code, scripts, byte code, markup language files, and patches. In addition, an "application" referred to herein, may also include files that are not executable in nature, such as documents that may need to be opened or other data files that need to be accessed.
The term "content" may also include files having executable content, such as: object code, scripts, byte code, markup language files, and patches. In addition, "content," as referred to herein, may also include files that are not executable in nature, such as documents that may need to be opened or other data files that need to be accessed.
As used in this description, the terms "component," "database," "module," "system," "thermal energy generating component," "processing component" and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computing device and the computing device may be a component. One or more components may reside within a process and/or thread of execution, and a component may be localized on one computer and/or distributed between two or more computers. In addition, these components may execute from various computer readable media having various data structures stored thereon. The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the Internet with other systems by way of the signal). In this description, the terms "communication device," "wireless device," "wireless telephone," "wireless communication device" and "wireless handset" are used interchangeably. With the advent of third generation ("3G") and fourth generation ("4G") wireless technology, greater bandwidth availability has enabled more portable computing devices with a greater variety of wireless capabilities.
In this description, the terms "central processing unit ("CPU")," "digital signal processor ("DSP")," and "chip" are used interchangeably.
In this description, it will be understood that the terms "thermal" and "thermal energy" may be used in association with a device or component capable of generating or dissipating energy that can be measured in units of "temperature." Consequently, it will further be understood that the term "temperature," with reference to some standard value, envisions any measurement that may be indicative of the relative warmth, or absence of heat, of a "thermal energy" generating device or component. For example, the "temperature" of two components is the same when the two components are in "thermal" equilibrium.
In this description, the terms "workload," "process load" and "process workload" are used interchangeably and generally directed toward the processing burden, or percentage of processing burden, associated with a given processing component in a given embodiment. Further to that which is defined above, a
"processing component" or "thermal energy generating component" may be, but is not limited to, a central processing unit, a graphical processing unit, a core, a main core, a sub-core, a processing area, a hardware engine, etc. or any component residing within, or external to, an integrated circuit within a portable computing device. Moreover, to the extent that the terms "thermal load," "thermal distribution," "thermal signature," "thermal processing load" and the like are indicative of workload burdens that may be running on a processing component, one of ordinary skill in the art will acknowledge that use of these "thermal" terms in the present disclosure may be related to process load distributions and burdens.
In this description, the term "portable computing device" ("PCD") is used to describe any device operating on a limited capacity power supply, such as a battery. Although battery operated PCDs have been in use for decades, technological advances in rechargeable batteries coupled with the advent of third generation ("3G") wireless technology have enabled numerous PCDs with multiple capabilities. Therefore, a PCD may be a cellular telephone, a satellite telephone, a pager, a PDA, a smartphone, a navigation device, a smartbook or reader, a media player, a combination of the aforementioned devices, a laptop computer with a wireless connection, among others.
[0039] FIG. 1 illustrates an embodiment of a computer system for implementing various features related to thermal load management or "steering" in a PCD 100. In general, the computer system employs two main phases: (1) a simulation phase performed by a simulation computer 10; and (2) an operational phase performed by a PCD 100.
[0040] The simulation phase involves simulating thermal loads to be experienced by an integrated circuit 102 during operation of the PCD 100. The simulation computer 10 identifies thermal load conditions produced by the PCD 100 under simulated workloads. The simulated workloads may be associated with the running of a specific application or "use case" on a given PCD 100 or, alternatively, may not be associated with any specific or predictable processing load scenario.
[0041] The simulation computer 10 may determine that a simulated thermal load
distribution or "hotspot" on the silicon die may compromise user experience of PCD 100 or become otherwise detrimental to the functionality of PCD 100. Notably, as thermal energy dissipation may be increased when processing loads are concentrated in a given component, thereby potentially impacting PCD 100 performance and/or user experience, thermal energy generation can be mitigated by reallocation of processing load across complimentary components. The simulation computer 10 improves the PCD 100 performance and user experience by "steering" or reallocating all or a portion of the processing load from a first simulated location on the silicon die to a second simulated location that is available for processing. The second simulated location may be represented in commands, instructions, or any other suitable computer readable data (referred to as "thermal load steering parameter(s)") that may be provided to and used by the PCD 100 during the operational phase to steer the processing load to the second simulated location.
[0042] Moreover, in some embodiments, the preferred proximity of a likely hotspot to a sensor may be within a 5 degree Celsius range. That is, because temperature associated with a heat wave which has propagated from a hotspot will be lower as the distance to the hotspot is increased, and because there is inevitably a time lag between the time a hotspot begins to occur and the time that a temperature increase may be detected at a distance away from the hotspot, it may be preferred in some embodiments that a temperature sensor be placed at a distance from a hotspot that is predicted to correlate with a 5 degree Celsius drop in temperature. However, it will be understood that, although placement of temperature sensors within various embodiments may present novel aspects for such embodiments, the various embodiments and their equivalents are not limited to the placement of a temperature sensor in a location that is 5 °C from a known hotspot or thermal energy generating component. That is, in some
embodiments, it is envisioned that the sensors may be located closer to, or farther away from, a known hotspot or thermal energy generating component than 5 °C.
[0043] Referring to FIG. 1, the simulation computer 10 comprises one or more
processors 12, a memory 14, and one or more input/output devices 16 in communication with each other via a local interface. The memory 14 comprises a computer model 22 of the integrated circuit 102 used in the PCD 100. The computer model 22 is a data representation of the various hardware and software components in the PCD 100 and the spatial arrangement, architecture, and operation of the various components of the integrated circuit 102, including, for example, thermal sensors 157 and a CPU 110. A detailed exemplary embodiment of a PCD 100 is described below in more detail with reference to FIGS. 6, 7A, 7B and 14. It should be appreciated that any PCD 100 and/or integrated circuit 102 may be modeled and represented in the computer model 22 provided to the simulation computer 10. The computer model 22 may comprise information such as, but not limited to, dimensions, size, and make-up of the printed circuit board ("PCB") stack, the amount of metal in traces, the sizes of the traces, the use of thermal vias, power load per sub block of the silicon die, power load per component on the PCB, use case specifics of the power load, any temporal dynamics of the power load, and other similar information as understood by one of ordinary skill in the art.
[0044] The thermal load simulation module(s) 20 interfaces with the computer model
22 and generally comprises the logic for performing the thermal load simulations based on the computer model 22. The thermal load simulation module(s) 20 generates the thermal load steering parameters 46 and stores them in, for example, the thermal load steering scenarios table 24, which is provided to the PCD 100. As illustrated in the embodiment of FIG. 1, the PCD 100 generally comprises thermal load steering module(s) 26, thermal policy manager module(s) 101, a monitor module 114, a central processing unit 110, one or more thermal sensors 157A located on the integrated circuit 102, and one or more thermal sensors 157B located off the integrated circuit 102. The thermal load steering module(s) 26 generally comprises the logic for monitoring the operations to be performed by the PCD 100 and determining whether thermal load steering should be performed. If thermal load steering is to be performed, the thermal load steering module(s) 26 accesses the thermal load steering scenarios table 24, interprets the thermal load steering parameter(s) 46, and schedules the workload in such a way to steer the processing load associated with the thermal load to underutilized, lower temperature or otherwise available processing capacity. Advantageously, such embodiments that leverage thermal load steering parameter(s) to reallocate a processing load to open processing capacity may realize the benefit of lower temperatures resulting from the reallocation.
[0045] One of ordinary skill in the art will recognize that the purpose of the thermal load steering parameter(s) 46 in some embodiments may further include provision of instructions to the thermal load steering module(s) 26 for steering a thermal load to a location near a certain thermal sensor or sensors 157. That is, it is envisioned that some embodiments may generate thermal load steering parameter(s) for the purpose of steering a processing load, which correlates to a given thermal load signature, to available processing capacity nearer a sensor 157. Advantageously, such embodiments that leverage thermal load steering parameter(s) to reallocate a processing load to open processing capacity near a sensor may realize more accurate temperature measurement, thus leading to more efficient reallocation of processing load.
[0046] As a non-limiting example of how thermal energy dissipation may be managed via reallocation of processing loads, an embodiment that includes a CPU 110 having main processing blocks and higher performing, specialized sub-processor blocks may have main processing blocks that represent ¾ of the CPU 110 area and sub-processor blocks that represent the remaining ¼ of the CPU area. The main processor blocks may have an associated power density ("PD") that dissipates ½ the total power of the overall CPU 110, while the sub-processor blocks also have an associated power density that dissipates ½ the total power. In such an exemplary case, one skilled in the art will recognize that the sub-processor blocks, which provide increased computational power to the overall CPU 110, represent a power density that is over twice that of the larger main processing blocks [PDsub = (P/2)/(A/4) = 2 P/A; PDmain = (P/2)/(3A/4) = (2/3) P/A] and, because power density is directly proportional to the generation of thermal energy, for a given processing load will cause generation and dissipation of more thermal energy than a main processing block. As such, embodiments that utilize thermal load steering parameter(s) to reallocate processing loads from one component to another, such as, for example, from a sub-processor block of CPU 110 to a main processor block of CPU 110, may realize the benefit of lower thermal energy dissipation for a relatively minor tradeoff of processing performance or Quality of Service ("QoS"). The main processor blocks may process the load more slowly, thus translating to a lower QoS, but dissipate less thermal energy than the sub-processors. Various benefits, features and aspects of managing thermal loads through the reallocation of processing loads from one area to another within CPU 110, or the like, are explained in more detail relative to FIGs. 8-14.
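By way of non-limiting illustration only, the following sketch reproduces the power-density arithmetic of the example above; the function name and the normalization of P and A to 1.0 are assumptions made for the illustration and are not part of any claimed implementation.

```python
# Minimal sketch of the power-density arithmetic described above (illustrative only).
# Total CPU power P and total die area A are normalized to 1.0 here.

def power_density(power_fraction: float, area_fraction: float) -> float:
    """Power density expressed as a multiple of the CPU-wide average P/A."""
    return power_fraction / area_fraction

pd_sub = power_density(0.5, 0.25)    # sub-processor blocks: (P/2)/(A/4) = 2.0 * P/A
pd_main = power_density(0.5, 0.75)   # main processing blocks: (P/2)/(3A/4) ~= 0.67 * P/A

print(pd_sub, pd_main, pd_sub / pd_main)   # -> 2.0 0.666... 3.0
```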
[0047] Returning to the thermal load steering module(s) 26, it should be appreciated that the thermal load steering module(s) 26 may communicate with (or be integrated with one or more of) the thermal policy manager module(s) 101, the monitor module 114, the CPU 110, or any other hardware or software components of the PCD 100.
[0048] FIG. 2 illustrates a method 28 implemented by the simulation computer 10. In an embodiment, the method 28 may be performed during the design and development of the integrated circuit 102 and the PCD 100 so that the devices may be appropriately configured to support the thermal load steering features. In other embodiments, the method 28 may be performed after the PCD 100 has been manufactured, in which case the thermal load steering feature may be enabled through appropriate software upgrades.
[0049] At block 30, the computer model 22 of the integrated circuit 102 is stored in the simulation computer 10 and accessed by the thermal load simulation module(s) 20. At block 32, computer simulation(s) are performed and one or more simulated thermal load conditions are identified (block 34). As known by one of ordinary skill in the art and illustrated in the example of FIG. 4A, a thermal load condition comprises a spatial thermal load distribution or "hotspot" 48 produced on the integrated circuit 102 under a simulated workload 44. The hotspot 48 as illustrated in FIG. 4A may be located on a first core 222 (FIG. 7A). As understood by one of ordinary skill in the art, measuring thermal energy (i.e., temperature) of the hotspot 48 with a temperature sensor 157A at a point that is some distance from the hotspot 48 may be difficult due to the thermal wave moving across the surface of an object (i.e., the computer chip or printed circuit board). The position of sensor 157A, which is at some distance relative to the hotspot 48, may not have the same temperature as the hotspot 48 itself. However, as described above, placement of the sensors proximate to components known to dissipate significant amounts of thermal energy, such as within 5 °C of the likely hotspot center, may provide data useful for more efficient reallocation of processing loads.

[0050] To improve the effectiveness and accuracy of thermal load management algorithms, the simulation computer 10 may determine that the processing load associated with hotspot 48, or a portion of the processing load associated with hotspot 48, should be reallocated to an underutilized or available processing area. Based on the computer model 22, the simulation computer 10 may determine that at least a portion of the simulated workload 44 may be handled by a second core 224 instead of the first core 222, thereby mitigating potential thermal energy dissipation by spreading the processing load across the two cores 222, 224.
[0051] At block 36, the appropriate thermal load steering parameters 46 are generated for moving the processing load associated with hotspot 48 to a location on the second core 224 (see FIG. 4B). At block 38, the simulation computer 10 generates and stores the thermal load steering scenarios table 24 in the memory 14. As illustrated in FIG. 3, the thermal load steering scenarios table 24 may comprise a scenario 40 for each simulated thermal load condition with corresponding data such as, but not limited to, thermal load condition data 42, simulated workload data 44, and the thermal load steering parameter(s) 46. The thermal load condition data 42, simulated workload data 44, and thermal load steering parameter(s) 46 may include, but are not limited to, separate use case breakdowns of power dissipation per power consuming (i.e., heat generating) component, location of these dissipation points both on chip and off chip, expected amount of millions of instructions per second ("MIPS") per processor for a given use, total power dissipated on chip, total power dissipated by the entire device, and other similar information as understood by one of ordinary skill in the art.
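As a non-limiting illustration of one way the thermal load steering scenarios table 24 might be represented in software, the following sketch associates thermal load condition data 42, simulated workload data 44, and thermal load steering parameter(s) 46 with each scenario 40; the field names, key scheme, and example values are hypothetical.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class SteeringScenario:
    # Hypothetical field names; the table need only associate thermal load condition
    # data 42, simulated workload data 44, and thermal load steering parameter(s) 46.
    thermal_load_condition: Dict[str, float]   # e.g. hotspot location and simulated temperature
    simulated_workload: Dict[str, float]       # e.g. expected MIPS per processor for the use case
    steering_parameters: List[str]             # e.g. instructions for moving load between cores

# A toy table with a single scenario 40, keyed by an assumed use-case name.
scenarios_table: Dict[str, SteeringScenario] = {
    "video_playback": SteeringScenario(
        thermal_load_condition={"hotspot_core_index": 0.0, "simulated_temp_c": 85.0},
        simulated_workload={"core0_mips": 1200.0, "core1_mips": 150.0},
        steering_parameters=["reallocate 50% of core0 load to core1"],
    )
}
```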
[0052] In the operational phase, the thermal load steering scenarios table 24 is provided to the PCD 100. FIG. 5 illustrates an embodiment of a method 50 implemented by the PCD 100 for performing thermal load steering. At block 52, the thermal load steering scenarios table 24 is stored in memory in the PCD 100. At block 54, the thermal load steering module(s) 26 monitors scheduled workloads for the PCD 100. In an embodiment, the monitoring may be performed by interfacing with an O/S scheduler 207 (See FIGs. 7A-7B), which receives and manages requests for hardware resources on the PCD 100. By monitoring the O/S scheduler requests, the thermal load steering module(s) 26 may compare the scheduled workloads to the simulated workload data 44 to determine if it matches one of the scenarios 40 in the table 24. If the scheduled workload matches a scenario 40 (decision block 56), the corresponding thermal load steering parameter(s) 46 may be obtained from the table 24 (block 58) and used to schedule, or otherwise reallocate, the workload on the PCD 100 (block 60).
If the scheduled workload does not match a scenario 40, then the "NO" branch from decision block 56 may be followed to optional block 57. In optional block 57, a default load steering vector may be accessed and used by the thermal load steering module(s) 26 if the scheduled workload does not match a scenario 40. Alternatively, optional block 57 may be skipped, in which case the "NO" branch is followed back to decision block 56.
As mentioned above, when the workload is scheduled according to the thermal load steering parameter(s) 46, the resulting thermal load may be mitigated by more thermally efficient allocation of processing load across the PCD 100. At block 62, the PCD 100 may initiate any desirable thermal management policies.
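The matching flow of blocks 54-60, including the optional default load steering vector of block 57, might be sketched as follows; the matching rule (per-core MIPS within a relative tolerance) and all identifiers are assumptions for illustration rather than a required implementation.

```python
def steer_workload(scheduled_workload, scenarios, default_vector=None):
    """Sketch of blocks 54-60: compare a scheduled workload (per-core MIPS) against the
    stored scenarios and return the steering parameter(s) to apply, if any."""
    for scenario in scenarios:
        if workloads_match(scheduled_workload, scenario["simulated_workload"]):
            return scenario["steering_parameters"]       # block 58
    return default_vector                                 # optional block 57 (may be None)

def workloads_match(scheduled, simulated, tolerance=0.2):
    # Assumed matching rule: every per-core figure within a relative tolerance.
    return all(
        abs(scheduled.get(key, 0.0) - value) <= tolerance * max(value, 1.0)
        for key, value in simulated.items()
    )

scenarios = [{"simulated_workload": {"core0_mips": 1200.0},
              "steering_parameters": ["reallocate 50% of core0 load to core1"]}]
print(steer_workload({"core0_mips": 1150.0}, scenarios))
```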
Examples of various alternative embodiments of the PCD 100 and thermal management policies are described below in connection with FIGS. 6-14. FIG. 6 is a functional block diagram of an exemplary, non-limiting aspect of a PCD 100 in the form of a wireless telephone for implementing methods and systems for monitoring thermal conditions and managing thermal policies. Per some embodiments, PCD 100 may be configured to manage thermal load associated with graphics processing. As shown, the PCD 100 includes an on-chip system 102 that includes a multi-core central processing unit ("CPU") 110 and an analog signal processor 126 that are coupled together. The CPU 110 may comprise a zeroth core 222, a first core 224, and an Nth core 230 as understood by one of ordinary skill in the art. Further, instead of a CPU 110, a digital signal processor ("DSP") may also be employed as understood by one of ordinary skill in the art.
In general, the thermal policy manager module(s) 101 may be responsible for monitoring and applying thermal policies that include one or more thermal mitigation techniques that may help a PCD 100 manage thermal conditions and/or thermal loads and avoid experiencing adverse thermal conditions, such as, for example, reaching critical temperatures, while maintaining a high level of functionality.
FIG. 6 also shows that the PCD 100 may include a monitor module 114. The monitor module 114 communicates with multiple operational sensors (e.g., thermal sensors 157) distributed throughout the on-chip system 102 and with the CPU 110 of the PCD 100 as well as with the thermal policy manager module 101. The thermal policy manager module 101 may work with the monitor module 114 to identify adverse thermal conditions and apply thermal policies that include one or more thermal mitigation techniques as will be described in further detail below.
[0058] As illustrated in FIG. 6, a display controller 128 and a touch screen controller
130 are coupled to the digital signal processor 110. A touch screen display 132 external to the on-chip system 102 is coupled to the display controller 128 and the touch screen controller 130.
[0059] PCD 100 may further include a video encoder 134, e.g., a phase-alternating line
("PAL") encoder, a sequential couleur avec memoire ("SECAM") encoder, a national television system(s) committee ("NTSC") encoder or any other type of video encoder 134. The video encoder 134 is coupled to the multi-core central processing unit ("CPU") 110. A video amplifier 136 is coupled to the video encoder 134 and the touch screen display 132. A video port 138 is coupled to the video amplifier 136. As depicted in FIG. 6, a universal serial bus ("USB") controller 140 is coupled to the CPU 110. Also, a USB port 142 is coupled to the USB controller 140. A memory 112 and a subscriber identity module (SIM) card 146 may also be coupled to the CPU 110.
Further, as shown in FIG. 6, a digital camera 148 may be coupled to the CPU 110. In an exemplary aspect, the digital camera 148 is a charge-coupled device ("CCD") camera or a complementary metal-oxide semiconductor ("CMOS") camera.
[0060] As further illustrated in FIG. 6, a stereo audio CODEC 150 may be coupled to the analog signal processor 126. Moreover, an audio amplifier 152 may be coupled to the stereo audio CODEC 150. In an exemplary aspect, a first stereo speaker 154 and a second stereo speaker 156 are coupled to the audio amplifier 152. FIG. 6 shows that a microphone amplifier 158 may be also coupled to the stereo audio CODEC 150.
Additionally, a microphone 160 may be coupled to the microphone amplifier 158. In a particular aspect, a frequency modulation ("FM") radio tuner 162 may be coupled to the stereo audio CODEC 150. Also, an FM antenna 164 is coupled to the FM radio tuner 162. Further, stereo headphones 166 may be coupled to the stereo audio CODEC 150.
[0061] FIG. 6 further indicates that a radio frequency ("RF") transceiver 168 may be coupled to the analog signal processor 126. An RF switch 170 may be coupled to the RF transceiver 168 and an RF antenna 172. As shown in FIG. 6, a keypad 174 may be coupled to the analog signal processor 126. Also, a mono headset with a microphone 176 may be coupled to the analog signal processor 126. Further, a vibrator device 178 may be coupled to the analog signal processor 126. FIG. 6 also shows that a power supply 180, for example a battery, is coupled to the on-chip system 102. In a particular aspect, the power supply includes a rechargeable DC battery or a DC power supply that is derived from an alternating current ("AC") to DC transformer that is connected to an AC power source.
[0062] The CPU 110 may also be coupled to one or more internal, on-chip thermal sensors 157A as well as one or more external, off-chip thermal sensors 157B. The on- chip thermal sensors 157A may comprise one or more proportional to absolute temperature ("PTAT") temperature sensors that are based on vertical PNP structure and are usually dedicated to complementary metal oxide semiconductor ("CMOS") very large-scale integration ("VLSI") circuits. The off-chip thermal sensors 157B may comprise one or more thermistors. The thermal sensors 157 may produce a voltage drop that is converted to digital signals with an analog-to-digital converter ("ADC") controller 103 (See FIG. 7A). However, other types of thermal sensors 157 may be employed without departing from the scope of the invention.
[0063] The thermal sensors 157, in addition to being controlled and monitored by an
ADC controller 103, may also be controlled and monitored by one or more thermal policy manager module(s) 101. The thermal policy manager module(s) may comprise software which is executed by the CPU 110. However, the thermal policy manager module(s) 101 may also be formed from hardware and/or firmware without departing from the scope of the invention. The thermal policy manager module(s) 101 may be responsible for monitoring and applying thermal policies that include one or more thermal mitigation techniques that may help a PCD 100 avoid critical temperatures while maintaining a high level of functionality.
[0064] Briefly referring back to FIG. 1, FIG. 1 also shows that the PCD 100 may
include a monitor module 114. The monitor module 114 communicates with multiple operational sensors distributed throughout the on-chip system 102 and with the CPU 110 of the PCD 100 as well as with the thermal policy manager module 101. The thermal policy manager module 101 may work with the monitor module to apply thermal policies that include one or more thermal mitigation techniques as will be described in further detail below.
[0065] Returning to FIG. 6, the touch screen display 132, the video port 138, the USB port 142, the camera 148, the first stereo speaker 154, the second stereo speaker 156, the microphone 160, the FM antenna 164, the stereo headphones 166, the RF switch 170, the RF antenna 172, the keypad 174, the mono headset 176, the vibrator 178, thermal sensors 157B, and the power supply 180 are external to the on-chip system 102. However, it should be understood that the monitor module 114 may also receive one or more indications or signals from one or more of these external devices by way of the analog signal processor 126 and the CPU 110 to aid in the real time management of the resources operable on the PCD 100.
In a particular aspect, one or more of the method steps described herein may be implemented by executable instructions and parameters stored in the memory 112 that form the one or more thermal policy manager module(s) 101. These instructions that form the thermal policy manager module(s) may be executed by the CPU 110, the analog signal processor 126, or another processor, in addition to the ADC controller 103 to perform the methods described herein. Further, the processors 110, 126, the memory 112, the instructions stored therein, or a combination thereof may serve as a means for performing one or more of the method steps described herein.
FIG. 7A is a functional block diagram illustrating an exemplary spatial arrangement of hardware for the chip 102 illustrated in FIG. 6. According to this exemplary embodiment, the applications CPU 110 is positioned on the far left side region of the chip 102 while the modem CPU 168, 126 is positioned on a far right side region of the chip 102. The applications CPU 110 may comprise a multi-core processor that includes a zeroth core 222, a first core 224, and an Nth core 230. The applications CPU 110 may be executing a thermal policy manager module 101A (when embodied in software) or it may include a thermal policy manager module 101A (when embodied in hardware). The applications CPU 110 is further illustrated to include operating system ("O/S") module 207 and a monitor module 114. Further details about the monitor module 114 will be described below in connection with FIG. 7B.
The applications CPU 110 may be coupled to one or more phase locked loops ("PLLs") 209A, 209B, which are positioned adjacent to the applications CPU 110 and in the left side region of the chip 102. Adjacent to the PLLs 209A, 209B and below the applications CPU 110 may be an analog-to-digital converter ("ADC") controller 103 that may include its own thermal policy manager 101B that works in conjunction with the main thermal policy manager module 101A of the applications CPU 110.
The thermal policy manager 101B of the ADC controller 103 may be responsible for monitoring and tracking multiple thermal sensors 157 that may be provided "on-chip" 102 and "off-chip" 102. The on-chip or internal thermal sensors 157A may be positioned at various locations. For example, a first internal thermal sensor 157A1 may be positioned in a top center region of the chip 102 between the applications CPU 110 and the modem CPU 168, 126 and adjacent to internal memory 112. A second internal thermal sensor 157A2 may be positioned below the modem CPU 168, 126 on a right side region of the chip 102. This second internal thermal sensor 157A2 may also be positioned between an advanced reduced instruction set computer ("RISC") instruction set machine ("ARM") 177 and a first graphics processor 135A. A digital-to-analog controller ("DAC") 173 may be positioned between the second internal thermal sensor 157A2 and the modem CPU 168, 126.
A third internal thermal sensor 157A3 may be positioned between a second graphics processor 135B and a third graphics processor 135C in a far right region of the chip 102. A fourth internal thermal sensor 157A4 may be positioned in a far right region of the chip 102 and beneath a fourth graphics processor 135D. And a fifth internal thermal sensor 157A5 may be positioned in a far left region of the chip 102 and adjacent to the PLLs 209 and ADC controller 103.
One or more external thermal sensors 157B may also be coupled to the ADC controller 103. The first external thermal sensor 157B1 may be positioned off-chip and adjacent to a top right quadrant of the chip 102 that may include the modem CPU 168, 126, the ARM 177, and DAC 173. A second external thermal sensor 157B2 may be positioned off-chip and adjacent to a lower right quadrant of the chip 102 that may include the third and fourth graphics processors 135C, 135D.
One of ordinary skill in the art will recognize that various other spatial arrangements of the hardware illustrated in FIG. 7A may be provided without departing from the scope of the invention. FIG. 7A illustrates yet one exemplary spatial arrangement and how the main thermal policy manager module 101A and ADC controller 103 with its thermal policy manager 101B may manage thermal states that are a function of the exemplary spatial arrangement illustrated in FIG. 7A.
FIG. 7B is a schematic diagram illustrating an exemplary software architecture of the PCD 100 of FIG. 6 and FIG. 7A for supporting dynamic voltage and frequency scaling ("DVFS") algorithms. DVFS algorithms may form or be part of at least one thermal mitigation technique that may be triggered by the thermal policy manager 101 when certain thermal conditions are met as will be described in detail below.
As illustrated in FIG. 7B, the CPU or digital signal processor 110 is coupled to the memory 112 via a bus 211. The CPU 110, as noted above, is a multiple-core processor having N core processors. That is, the CPU 110 includes a first core 222, a second core 224, and an Nth core 230. As is known to one of ordinary skill in the art, each of the first core 222, the second core 224 and the Nth core 230 is available for supporting a dedicated application or program. Alternatively, one or more applications or programs can be distributed for processing across two or more of the available cores.
The CPU 110 may receive commands from the thermal policy manager module(s) 101 that may comprise software and/or hardware. If embodied as software, the thermal policy manager module 101 comprises instructions that are executed by the CPU 110 that issues commands to other application programs being executed by the CPU 110 and other processors.
The first core 222, the second core 224 through to the Nth core 230 of the CPU 110 may be integrated on a single integrated circuit die, or they may be integrated or coupled on separate dies in a multiple-circuit package. Designers may couple the first core 222, the second core 224 through to the Nth core 230 via one or more shared caches and they may implement message or instruction passing via network topologies such as bus, ring, mesh and crossbar topologies.
In the illustrated embodiment, the RF transceiver 168 is implemented via digital circuit elements and includes at least one processor such as the core processor 210 (labeled "Core"). In this digital implementation, the RF transceiver 168 is coupled to the memory 112 via bus 213.
Each of the bus 211 and the bus 213 may include multiple communication paths via one or more wired or wireless connections, as is known in the art. The bus 211 and the bus 213 may have additional elements, which are omitted for simplicity, such as controllers, buffers (caches), drivers, repeaters, and receivers, to enable
communications. Further, the bus 211 and the bus 213 may include address, control, and/or data connections to enable appropriate communications among the
aforementioned components.
When the logic used by the PCD 100 is implemented in software, as is shown in FIG. 7B, it should be noted that one or more of startup logic 250, management logic 260, dynamic voltage and frequency scaling ("DVFS") interface logic 270, applications in application store 280 and portions of the file system 290 may be stored on any computer-readable medium for use by or in connection with any computer-related system or method.

[0081] As understood by one of ordinary skill in the art, the demand for processors that provide high performance and low power consumption has led to the use of dynamic voltage and frequency scaling ("DVFS") in processor designs. DVFS enables trade-offs between power consumption and performance. Processors 110 and 126 (FIG. 6) may be designed to take advantage of DVFS by allowing the clock frequency of each processor to be adjusted with a corresponding adjustment in voltage. Reducing clock frequency alone is not useful, since any power savings is offset by an increase in execution time, resulting in no net reduction in the total energy consumed. However, a reduction in operating voltage results in a proportional savings in power consumed. One main issue for DVFS enabled processors 110, 126 is how to control the balance between performance and power savings.
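For illustration of the DVFS trade-off described above, the following sketch uses a commonly cited first-order model of switching power, P ≈ C·V²·f; this model and the example values are general approximations chosen for the illustration and are not taken from the disclosure.

```python
def dynamic_power(c_eff: float, voltage_v: float, freq_hz: float) -> float:
    """First-order switching-power model, P ~= C * V^2 * f, used here purely as a
    general illustration of the DVFS trade-off (not a formula from this disclosure)."""
    return c_eff * voltage_v ** 2 * freq_hz

baseline = dynamic_power(1e-9, 1.3, 600e6)   # hypothetical capacitance, voltage, frequency
scaled   = dynamic_power(1e-9, 1.1, 500e6)   # lower both the voltage and the clock frequency
print(f"power reduced to {scaled / baseline:.0%} of the baseline")
```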
[0082] In the context of this document, a computer-readable medium is an electronic, magnetic, optical, or other physical device or means that can contain or store a computer program and data for use by or in connection with a computer-related system or method. The various logic elements and data stores may be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In the context of this document, a "computer- readable medium" can be any means that can store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
[0083] The computer-readable medium can be, for example but not limited to, an
electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic) having one or more wires, a portable computer diskette (magnetic), a random-access memory (RAM) (electronic), a read-only memory (ROM) (electronic), an erasable programmable read-only memory (EPROM, EEPROM, or Flash memory) (electronic), an optical fiber (optical), and a portable compact disc read-only memory (CD-ROM) (optical). Note that the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, for instance via optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
[0084] In an alternative embodiment, where one or more of the startup logic 250,
management logic 260 and perhaps the DVFS interface logic 270 are implemented in hardware, the various logic may be implemented with any or a combination of the following technologies, which are each well known in the art: a discrete logic circuit(s) having logic gates for implementing logic functions upon data signals, an application specific integrated circuit (ASIC) having appropriate combinational logic gates, a programmable gate array(s) (PGA), a field programmable gate array (FPGA), etc.
[0085] The memory 112 is a non-volatile data storage device such as a flash memory or a solid-state memory device. Although depicted as a single device, the memory 112 may be a distributed memory device with separate data stores coupled to the digital signal processor and or the core 210 (or additional processor cores) in the RF transceiver 168.
[0086] The startup logic 250 includes one or more executable instructions for
selectively identifying, loading, and executing a select program for managing or controlling the performance of one or more of the available cores such as the first core 222, the second core 224 through to the Nth core 230. A select program can be found in the program store 296 of the embedded file system 290 and is defined by a specific combination of a performance scaling algorithm 297 and a set of parameters 298. The select program, when executed by one or more of the core processors in the CPU 110 and the core 210 in the RF transceiver 168, may operate in accordance with one or more signals provided by the monitor module 114 in combination with control signals provided by the one or more thermal policy manager module(s) 101 to scale the performance of the respective processor core. In this regard, the monitor module 114 may provide one or more indicators of events, processes, applications, resource status conditions, elapsed time, as well as temperature as received from the thermal policy manager module 101.
[0087] The management logic 260 includes one or more executable instructions for terminating an operative performance scaling program on one or more of the respective processor cores, as well as selectively identifying, loading, and executing a more suitable replacement program for managing or controlling the performance of one or more of the available cores. The management logic 260 is arranged to perform these functions at run time or while the PCD 100 is powered and in use by an operator of the device. A replacement program can be found in the program store 296 of the embedded file system 290 and is defined by a specific combination of a performance scaling algorithm 297 and a set of parameters 298.
[0088] The replacement program, when executed by one or more of the core processors in the digital signal processor or the core 210 in the RF transceiver 168, may operate in accordance with one or more signals provided by the monitor module 114 or one or more signals provided on the respective control inputs of the various processor cores to scale the performance of the respective processor core. In this regard, the monitor module 114 may provide one or more indicators of events, processes, applications, resource status conditions, elapsed time, temperature, etc., in response to control signals originating from the thermal policy manager 101.
[0089] The DVFS interface logic or interface logic 270 includes one or more executable instructions for presenting, managing and interacting with external inputs to observe, configure, or otherwise update information stored in the embedded file system 290. In one embodiment, the interface logic 270 may operate in conjunction with manufacturer inputs received via the USB port 142. These inputs may include one or more programs to be deleted from or added to the program store 296. Alternatively, the inputs may include edits or changes to one or more of the programs in the program store 296.
Moreover, the inputs may identify one or more changes to, or entire replacements of one or both of the startup logic 250 and the management logic 260. By way of example, the inputs may include a change to the management logic 260 that instructs the PCD 100 to suspend all performance scaling in the RF transceiver 168 when the received signal power falls below an identified threshold. By way of further example, the inputs may include a change to the management logic 260 that instructs the PCD 100 to apply a desired program when the video codec 134 is active.
[0090] The interface logic 270 enables a manufacturer to controllably configure and adjust an end user's experience under defined operating conditions on the PCD 100. When the memory 112 is a flash memory, one or more of the startup logic 250, the management logic 260, the interface logic 270, the application programs in the application store 280 or information in the embedded file system 290 can be edited, replaced, or otherwise modified. In some embodiments, the interface logic 270 may permit an end user or operator of the PCD 100 to search, locate, modify or replace the startup logic 250, the management logic 260, applications in the application store 280 and information in the embedded file system 290. The operator may use the resulting interface to make changes that will be implemented upon the next startup of the PCD 100. Alternatively, the operator may use the resulting interface to make changes that are implemented during run time.
[0091] The embedded file system 290 includes a hierarchically arranged DVFS store
292. In this regard, the file system 290 may include a reserved section of its total file system capacity for the storage of information for the configuration and management of the various parameters 298 and performance scaling algorithms 297 used by the PCD 100. As shown in FIG. 7B, the DVFS store 292 includes a core store 294, which includes a program store 296, which includes one or more DVFS programs. Each program is defined as a combination of a respective performance scaling algorithm and a set of parameters associated with the specific algorithm. As a further example of the hierarchical nature of the DVFS store 292, a particular member of a set of files may be located and identified by the path of \startup\core0\algorithm\parameter set. In this example, a program is identified by the algorithm in combination with the contents of information stored in the parameter set. For example, a conventional DVFS algorithm known as "classic" may be identified to manage performance scaling on core0 222 in accordance with the parameters sample rate, samples to increase, and samples to decrease as follows: \startup\core0\classic\SampleRate, with a value of 100, where the sample rate is in MHz; \startup\core0\classic\SamplesToIncrease, with a value of 2, where the samples to increase is an integer; and
\startup\core0\classic\SamplesToDecrease, with a value of 1, where the samples to decrease is an integer.
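A non-limiting sketch of how such parameter files might be read, and of one possible form of the threshold behavior described in the following paragraph, appears below; the directory layout mirrors the \startup\core0\classic\ paths listed above, while the helper names and the interpretation of the low/high idle thresholds are assumptions.

```python
from pathlib import Path

def read_parameter(dvfs_root: Path, core: str, algorithm: str, name: str) -> int:
    """The filename names the parameter and the file contents hold its value,
    mirroring the \\startup\\core0\\classic\\SampleRate layout described above."""
    return int((dvfs_root / core / algorithm / name).read_text().strip())

def classic_step(idle_history, low_idle_pct, high_idle_pct,
                 samples_to_increase, samples_to_decrease):
    """Hypothetical helper: return +1 to raise the clock level, -1 to lower it, or 0 to
    hold, based on consecutive CPU idle-percentage samples. The interpretation of the
    low/high idle thresholds is an assumption."""
    if len(idle_history) >= samples_to_increase and all(
            sample < low_idle_pct for sample in idle_history[-samples_to_increase:]):
        return +1   # busy for enough consecutive samples -> increase performance
    if len(idle_history) >= samples_to_decrease and all(
            sample > high_idle_pct for sample in idle_history[-samples_to_decrease:]):
        return -1   # idle for enough consecutive samples -> decrease performance
    return 0

# With SamplesToIncrease = 2 and SamplesToDecrease = 1 as listed above, two consecutive
# busy samples (low idle percentage) raise the clock level:
print(classic_step([5.0, 4.0], low_idle_pct=10.0, high_idle_pct=90.0,
                   samples_to_increase=2, samples_to_decrease=1))   # -> 1
```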
[0092] That is, the respective filenames define a parameter and the value of the
parameter is identified by the contents of the file. The algorithm is defined by a periodic sampling of the CPU idle percentage and operates in accordance with a low threshold (% idle) and a high threshold (% idle). If a samples-to-increase threshold comparator indicates for two consecutive samples that performance should be increased, the DVFS algorithm increases performance in accordance with a predetermined clock level adjustment. Conversely, if a samples-to-decrease threshold comparator indicates for one consecutive sample that performance should be decreased, the DVFS algorithm decreases performance in accordance with the predetermined clock level (i.e., frequency) adjustment. As explained above, processor or core operating voltage may be changed together with changes in the clock frequency.

[0093] Alternatively, or additionally, the DVFS store 292 may be arranged such that the search path starts from the most specific with respect to its application (i.e., the processor core, algorithm, and parameter value) and progresses to the least specific with respect to application. In an example embodiment, parameters are defined in the directories /core0, /coreAll and /default in association with the "classic" performance scaling algorithm. For example, the path \core0\classic\SampleRate applies only to the classic algorithm operating on core0. This most specific application will override all others. The path \coreAll\classic\SampleRate applies to any processor core running the classic algorithm. This application is not as specific as the example path above but is more specific than \default\classic\SampleRate, which applies to any processor core running the classic algorithm.
[0094] This default application is the least specific and is used only if no other suitable path exists in the DVFS store 292. The first parameter found will be the one used. The \default location will always have a valid parameter file. The architecture of the individual cores, the architecture of the one or more shared caches and the
mechanism(s) used to pass instructions between the cores, as well as the desired use cases for the PCD 100 are expected to dictate the nature of the various performance scaling algorithms 297 stored in the memory 112.
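One possible sketch of the most-specific-to-least-specific search described above follows; the function name and the assumption that the \default entry is always present are illustrative only.

```python
from pathlib import Path
from typing import Optional

def find_parameter_file(dvfs_root: Path, core: str, algorithm: str, name: str) -> Optional[Path]:
    """Search from the most specific location to the least specific, returning the first
    parameter file found: the named core, then coreAll, then default (assumed always present)."""
    for scope in (core, "coreAll", "default"):
        candidate = dvfs_root / scope / algorithm / name
        if candidate.is_file():
            return candidate
    return None

# e.g. find_parameter_file(Path("dvfs_store"), "core0", "classic", "SampleRate")
```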
[0095] FIG. 7C is a first table 267 listing exemplary frequency values for two different
DVFS algorithms that may be selected by the DVFS interface logic 270. According to this exemplary first table 267, each core of the multi-core CPU 110 may be assigned specific maximum clock frequency values depending upon the current DVFS algorithm being executed. For the first DVFS algorithm that is listed in the first row of the table 267, Core 0 may be assigned a maximum clock frequency of 600 MHz, while Core 1 may be assigned a maximum clock frequency of 650 MHz, and the Nth Core may be assigned a maximum clock frequency of 720 MHz. For the second DVFS algorithm that is listed in the second row of the table 267, Core 0 may be assigned a maximum clock frequency of 610 MHz, while Core 1 is assigned a maximum clock frequency of 660 MHz, and the Nth core may be assigned a maximum clock frequency of 700 MHz. These limits on clock frequency may be selected by the thermal policy manager 101 depending upon the current thermal state of the PCD 100.
[0096] FIG. 7D is a second table 277 listing exemplary frequency and voltage pairs for two DVFS algorithms. For a first DVFS algorithm listed in the first row of the table 277, Core 0 may be assigned a maximum clock frequency of 600 MHz while its maximum voltage may be limited to 1.3 mV. Core 1 may be assigned a maximum clock frequency of 500 MHz and a corresponding maximum voltage of 2.0 mV. Core N may be assigned a maximum clock frequency of 550 MHz and a corresponding maximum voltage of 1.8 mV. For the second DVFS algorithm listed in the second row of the table 277, Core 0 may be assigned a maximum clock frequency of 550 MHz while the maximum voltage is assigned the value of 1.0 mV. Core 1 may be assigned a maximum clock frequency of 600 MHz and the corresponding maximum voltage of 1.5 mV. And lastly, Core N may be assigned a maximum clock frequency of 550 MHz and a corresponding maximum voltage of 1.9 mV. The thermal policy manager 101 may select the various pairs of frequency and voltages enumerated in table 277 depending upon the current thermal state of the PCD 100.
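As a non-limiting illustration, the per-core frequency limits of the first table 267 and the selection of limits by the thermal policy manager 101 based on the current thermal state might be sketched as follows; the mapping of thermal states to DVFS algorithms shown here is an assumption for the example.

```python
# Per-core maximum clock frequencies (MHz) patterned after the first table 267.
FREQ_LIMITS_MHZ = {
    "dvfs_algorithm_1": {"core0": 600, "core1": 650, "coreN": 720},
    "dvfs_algorithm_2": {"core0": 610, "core1": 660, "coreN": 700},
}

def select_frequency_limits(thermal_state: str) -> dict:
    """Assumed policy: hotter thermal states select the algorithm whose peak cap is lower."""
    algorithm = "dvfs_algorithm_2" if thermal_state in ("qos", "severe") else "dvfs_algorithm_1"
    return FREQ_LIMITS_MHZ[algorithm]

print(select_frequency_limits("qos"))   # -> {'core0': 610, 'core1': 660, 'coreN': 700}
```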
[0097] FIG. 8 is an exemplary state diagram 300 that illustrates various thermal policy states 305, 310, 315, and 320 that are tracked by the thermal policy manager 101. The first policy state 305 may comprise a "normal" state in which the thermal policy manager 101 only monitors thermal sensors 157 in a routine or ordinary fashion. In this exemplary first and normal state 305, the PCD 100 is usually not in any danger or risk of reaching critical temperatures that may cause failure of any of the hardware and/or software components. In this exemplary state, the thermal sensors 157 may be detecting or tracking temperatures that are at 50°C or below. However, one of ordinary skill in the art will recognize that other temperature ranges may be established for the first and normal state 305 without departing from the scope of the invention.
[0098] The second policy state 310 may comprise a "quality of service" or "QoS" state in which the thermal policy manager 101 may increase the frequency with which thermal sensors 157 are polled or with which the thermal sensors 157 send their temperature status reports to the thermal policy manager 101. This exemplary second state 310 may be reached or entered into by the thermal policy manager 101 when a change of temperature has been detected in the first, normal state 305. The threshold or magnitude of the change in temperature (delta T) which triggers this QoS state 310 may be adjusted or tailored according to a particular PCD 100. Therefore, while a PCD 100 may be operating in the first normal state 305, depending upon the magnitude of the change in temperature that is detected by one or more thermal sensors, the PCD 100 may leave the first normal state 305 and enter into the second QoS state 310 as tracked by the thermal policy manager 101.

[0099] For example, a PCD 100 may have a first maximum temperature reading from a given thermal sensor 157 of approximately 40°C. And a second reading from the same thermal sensor 157 may show a change in temperature of only 5°C which takes the maximum temperature being detected to 45°C. However, while the maximum temperature being detected may be below an established threshold of 50°C for the first, normal state 305, the change in temperature by 5°C may be significant enough for the thermal policy manager 101 to change the state to the second, QoS state 310.
[00100] In the second, QoS thermal state 310 the thermal policy manager 101 may
request or it may actually perform one or more thermal mitigation techniques in order to reduce the thermal load and temperature of the PCD 100. In this particular state 310, the thermal policy manager 101 is designed to implement or request thermal mitigation techniques that may be barely perceivable by an operator and which may degrade a quality of service provided by the PCD 100 in a minimal fashion. The temperature range for this second, QoS thermal state 310 may comprise a range between about 50°C and about 80°C. One of ordinary skill in the art will recognize that other temperature ranges may be established for the second QoS state 310 and are within the scope of the invention.
[00101] As noted previously, the second, QoS state 310 may be triggered based on the magnitude and/or location of the change in temperature and is not necessarily limited to the endpoints of a selected temperature range. Further details about this second, QoS thermal state 310 will be described below in connection with FIG. 9.
[00102] The third thermal state 315 may comprise a "severe" state in which the thermal policy manager 101 continues to monitor and/or receives interrupts from thermal sensors 157 while requesting and/or applying more aggressive thermal mitigation techniques relative to the second, QoS state 310 described above. This means that in this state the thermal policy manager 101 is less concerned about quality of service from the perspective of the operator. In this thermal state, the thermal policy manager 101 is more concerned about mitigating or reducing thermal load in order to decrease the temperature of the PCD 100. In this third thermal state 315, a PCD 100 may have degradations in performance that are readily perceived or observed by an operator. The third, severe thermal state 315 and its corresponding thermal mitigation techniques applied or triggered by the thermal policy manager 101 will be described in further detail below in connection with FIG. 9. The temperature range for this third, severe thermal state 315 may comprise a range between about 80°C and about 100°C.

[00103] Similar to the first thermal state 305 and second thermal state 310 as discussed above, this third and severe thermal state 315 may be initiated based upon the change in temperature detected by one or more thermal sensors 157 and is not necessarily limited to a temperature range established or mapped for this third thermal state 315. For example, as the arrows in this diagram illustrate, each thermal state may be initiated in sequence or they can be initiated out of sequence depending upon the magnitude of the change in temperature (delta T) that may be detected. So this means that the PCD 100 may leave the first and normal thermal state 305 and enter into or initiate the third and severe thermal state 315 based on a change in temperature that is detected by one or more thermal sensors 157, and vice versa. Similarly, the PCD 100 may be in the second or QoS thermal state 310 and enter into or initiate the fourth or critical state 320 based on a change in temperature that is detected by one or more thermal sensors 157, and vice versa. In this exemplary fourth and critical state 320, the thermal policy manager 101 is applying or triggering as many and as sizable thermal mitigation techniques as possible in order to avoid reaching one or more critical temperatures that may cause permanent damage to the electronics contained within the PCD 100.
[00104] This fourth and critical thermal state 320 may be similar to conventional
techniques that are designed to eliminate functionality and operation of a PCD 100 in order to avoid critical temperatures. The fourth thermal state 320 may comprise a "critical" state in which the thermal policy manager 101 applies or triggers the shutting down of non-essential hardware and/or software. The temperature range for this fourth thermal state may include those of about 100°C and above. The fourth and critical thermal state 320 will be described in further detail below in connection with FIG. 9.
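By way of non-limiting illustration, the four thermal policy states and their exemplary temperature ranges, together with a delta-T escalation rule of the kind discussed in connection with FIG. 10 below, might be sketched as follows; the state names, the exact escalation rule, and the function names are assumptions for the example.

```python
def classify_state(temp_c: float) -> str:
    """Map a temperature reading to one of the four exemplary thermal policy states."""
    if temp_c < 50.0:
        return "normal"      # first state 305
    if temp_c < 80.0:
        return "qos"         # second state 310
    if temp_c < 100.0:
        return "severe"      # third state 315
    return "critical"        # fourth state 320

def next_state(prev_temp_c: float, temp_c: float, elapsed_s: float) -> str:
    """Escalate on a rapid rise even when the absolute reading stays in a lower range."""
    state = classify_state(temp_c)
    # Assumed rule: a 3 degree C jump within five seconds escalates normal -> QoS,
    # echoing the delta-T discussion around FIG. 10.
    if state == "normal" and (temp_c - prev_temp_c) >= 3.0 and elapsed_s <= 5.0:
        state = "qos"
    return state

print(next_state(44.0, 48.0, 4.0))   # -> 'qos' despite the reading being below 50 C
```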
[00105] The thermal policy management system is not limited to the four thermal states
305, 310, 315, and 320 illustrated in FIG. 8. Depending upon a particular PCD 100, additional or fewer thermal states may be provided without departing from the scope of the invention. That is, one of ordinary skill in the art recognizes that additional thermal states may improve functionality and operation of a particular PCD 100 while in other situations fewer thermal states may be preferred for a particular PCD 100 that has its own unique hardware and/or software.
[00106] FIG. 9 is a diagram illustrating exemplary thermal mitigation techniques that may be applied or ordered by the thermal policy manager 101 and are dependent upon a particular thermal state of a PCD 100. It should be appreciated that the thermal mitigation techniques described herein may be applied to manage thermal loads associated with any type of processing, but may be particularly useful in situations involving graphics processing due to inherent power demands, system requirements, and importance to the overall user experience of the PCD 100. As noted previously, the first thermal state 305 may comprise a "normal" state in which the thermal policy manager 101 being executed by the CPU 110 and partially by the ADC controller 103 may monitor, poll, or receive one or more status reports on temperature from one or more thermal sensors 157. In this first thermal state 305, a PCD 100 may not be in any danger or risk of reaching a critical temperature that may harm one or more software and/or hardware components within the PCD 100. Usually, in this first thermal state, the thermal policy manager 101 is not applying or has not requested any initiation of thermal mitigation techniques such that the PCD 100 is operating at its fullest potential and highest performance without regard to thermal loading. The temperature range for this first thermal state 305 may include those of 50°C and below. For this first thermal state 305, the thermal policy manager 101 may reside in the ADC controller 103 while the main thermal policy manager 101 for all other states may reside or be executed by the CPU 110. In an alternate exemplary embodiment, the thermal policy manager 101 may reside only in the CPU 110.
[00107] In the second thermal state 310 also referred to as the QoS state 310, once it is initiated, the thermal policy manager 101 may begin more rapid monitoring, polling, and/or receiving of interrupts (relative to the first thermal state 305) from thermal sensors 157 regarding current temperature of the PCD 100. In this exemplary second thermal state 310, the thermal policy manager 101 may initiate or request the monitor module 114 and/or operating system ("O/S") module 207 of FIG. 7A to start applying thermal mitigation techniques but with the objective to maintain high-performance with little or no perception in degradations to the quality of service as perceived by the operator of the PCD 100.
[00108] According to this exemplary second thermal state 310 illustrated in FIG. 9, the thermal policy manager 101 may request the monitor 114 and/or the O/S module 207 to initiate thermal mitigation techniques such as, but not limited to, (1) load scaling, (2) load dynamic scaling, (3) spatial load shifting, and (4) process load reallocation. Load scaling may comprise adjusting or "scaling" the maximum clock frequency allowed in the DVFS algorithm, such as the values provided in the first table 267 of FIG. 7C. Such an adjustment may limit the maximum heat dissipation. This thermal load mitigation technique may also involve adjusting the voltage to match the standard DVFS table used for a particular and unique PCD 100.
The thermal load mitigation technique of load dynamic scaling may comprise the scaling of one and/or all of the N application processor cores 222, 224, and 230. This thermal load mitigation technique may comprise establishing the maximum clock frequency allowed for the DVFS algorithm of a particular core 222, 224, or 230. The DVFS algorithm will use a table of voltage/frequency pairs, such as the second table 277 illustrated in FIG. 7D, to scale processing capability.
One such way includes limiting the number of millions of instructions per second ("MIPS") by limiting the maximum frequency allowed. In this way, the thermal policy manager 101 is effectively limiting the power consumption of the core(s) 222, 224, and 230 and limiting their capability (MIPS) that is available. The thermal policy manager 101 may choose to limit N cores 222, 224, 230 together, or it can select and choose which cores 222, 224, 230 get scaled back while allowing other cores 222, 224, 230 to operate in an unconstrained manner. The thermal policy manager 101, monitor module 114, and/or O/S module 207 may make their decisions on which cores 222, 224, 230 to control based on data received from thermal sensors 157, software application requirements, and/or best-effort prediction. The temperature range for this second thermal state may include those of about 50°C to about 80°C.
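A non-limiting sketch of load (dynamic) scaling, in which the maximum clock frequency of selected cores is capped while other cores remain unconstrained, appears below; the core identifiers and cap values are hypothetical.

```python
def apply_frequency_caps(current_max_mhz: dict, cores_to_limit, cap_mhz: float) -> dict:
    """Cap the maximum DVFS clock frequency of selected cores, leaving the rest unconstrained."""
    limited = dict(current_max_mhz)
    for core in cores_to_limit:
        limited[core] = min(limited[core], cap_mhz)   # lowers both available MIPS and power
    return limited

caps = apply_frequency_caps({"core0": 720, "core1": 720, "core2": 720},
                            cores_to_limit=["core1"], cap_mhz=500)
print(caps)   # -> {'core0': 720, 'core1': 500, 'core2': 720}
```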
The thermal load mitigation technique of spatial load shifting comprises the activation and deactivation of cores within a multi-core processor system. If N multiple cores exist, each core may be loaded up with work or its performance maximized using up to N-1 cores, and then, as a thermal sensor 157 indicates a heating problem, the location of an inactive core functioning as a cooling device may be shifted. Each core may effectively be cooled by letting it idle in a predetermined pattern or in a pattern dictated by thermal measurements. A 'hole' in MIPS is effectively moved around the cores to cool them over the course of several seconds. In this way, several GHz of processing power may be made available to a PCD 100, while still cooling the silicon die by moving the load around. Further details of spatial load shifting will be described below in connection with FIG. 13A.
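As a non-limiting illustration of spatial load shifting, the following sketch rotates an idle "cooling hole" toward the hottest core reported by the thermal sensors 157; the function name, temperatures, and the specific rotation rule are assumptions for the example.

```python
def choose_idle_core(core_temps_c: dict, current_idle: str) -> str:
    """Pick which core should next be left idle as the cooling 'hole': normally the
    hottest core, otherwise rotate to the next core in a fixed order."""
    hottest = max(core_temps_c, key=core_temps_c.get)
    if hottest != current_idle:
        return hottest
    cores = sorted(core_temps_c)                        # fixed rotation order
    return cores[(cores.index(current_idle) + 1) % len(cores)]

print(choose_idle_core({"core0": 72.0, "core1": 85.0, "core2": 66.0}, "core0"))  # -> 'core1'
```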
The thermal mitigation technique of process load reallocation is described below in connection with FIGs. 12-14. In general, however, this technique is directed to the management of thermal energy creation and dissipation resulting from the operation of multi-core graphics processing units ("GPU") and/or multi-core central processing units ("CPU"). Ideally, for the efficient implementation of a thermal mitigation technique in the form of a process load reallocation algorithm, PCD 100 may have temperature sensors 157 in close proximity to individual cores or groups of cores. Based on temperature readings from sensors 157, drivers executed on one or more of the cores themselves may be leveraged to cause a process load reallocation from a "hot" core to a "cool," or otherwise less utilized, core. Advantageously, embodiments of various thermal mitigation techniques, such as process load reallocation and spatial load shifting may be implemented in real-time, or near real-time, as the thermal policy manager module(s) 101 may be operable to react to temperature readings which fluctuate with processing loads. Thus, in embodiments operable to take thermal mitigation measures in real-time, or near real-time, based on active monitoring of temperature readings from sensors 157, one of ordinary skill in the art will recognize that predefined thermal steering scenarios 24 may not be required. That is, some embodiments may utilize algorithms that, based on real-time temperature inputs and workload data, can generate instructions for efficient reallocation or spatial shifting of processing load.
[00113] Notably, in some embodiments, such as embodiments designed for process load reallocation in multi-core CPUs having cores which contain both main processing blocks with low power density and specialized sub-processor blocks with high power density ratings, process loads may be reallocated within a given core. For example, process loads requiring high computational power such as, but not limited to, gaming applications having excessive graphical processing requirements, may normally be scheduled for processing at a sub-core level to benefit from the improved computational capacity of the sub-core. An overloaded process queue at a sub-core, however, may generate excessive thermal energy that could be detrimental to the CPU 110 or other components comprised within the PCD 100. In such a scenario, the thermal energy load may be mitigated by reallocating within the given core (as opposed to between cores) all or part of the process load from the high power density sub-processor block to the lower power density main processing block.
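A non-limiting sketch of process load reallocation within a given core, moving part of a queued load from a hot, high power density sub-processor block to the lower power density main processing block, appears below; the 50% split, the temperature threshold, and the function name are assumptions for the example.

```python
def reallocate_load(sub_block_load_mips: float, sub_block_temp_c: float,
                    threshold_c: float = 80.0, fraction: float = 0.5):
    """Return (sub_block_load, main_block_load) after reallocating a fraction of the
    sub-processor block's load once its temperature crosses the threshold."""
    if sub_block_temp_c < threshold_c:
        return sub_block_load_mips, 0.0
    moved = sub_block_load_mips * fraction
    return sub_block_load_mips - moved, moved

print(reallocate_load(800.0, 86.0))   # -> (400.0, 400.0)
```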
[00114] Referring now to the third thermal state 315 of FIG. 9, also known as the severe thermal state 315, the thermal policy manager 101 may start continuous monitoring, polling, or receiving interrupts from thermal sensors 157 so that temperature is sensed more frequently compared to the second, lower thermal state 310. In this exemplary thermal state 315, the thermal policy manager 101 may apply or request that the monitor module 114 and/or O/S module 207 apply more aggressive thermal mitigation techniques and/or additional thermal mitigation techniques (relative to the second thermal state 310) with probable perceivable degradation of performance observed by an operator of the PCD 100. According to this exemplary thermal state 315, the thermal policy manager 101 may cause reduction in power to one or more hardware devices like amplifiers, processors, etc. The thermal policy manager 101 may also shift workloads among different hardware devices in a spatial manner in order to bring active devices off-line and to bring inactive devices on-line. The thermal mitigation techniques of this third and severe thermal state 315 may be the same as those described above with respect to the second, quality of service thermal state 310. However, these same thermal mitigation techniques may be applied in a more aggressive manner. For example, in reallocation of process loads, the thermal policy manager 101 may request that a larger percentage of process loads are reallocated from the high power density sub-processor blocks to the main processor blocks of the various cores, as compared to the second thermal state 310. Further, the thermal policy manager 101 may request that active process loads are completely reallocated from the high power density sub-processor blocks to the main processor blocks of the various cores, effectively taking the high thermal energy generating sub-processor blocks offline. These process load allocations may result in less than desirable processing performance relative to what is recommended for supporting a particular application program.
[00115] Referring now to the fourth and critical state 320 of FIG. 9, the thermal policy manager 101 may start shutting down or requesting the monitor 114 and/or O/S module 207 to start shutting down all nonessential hardware and/or software modules.
[00116] "Nonessential" hardware and/or software modules may be different for each type of particular PCD 100. According to one exemplary embodiment, all nonessential hardware and/or software modules may include all of those outside of an emergency 911 telephone call function and global positioning satellite ("GPS") functions. This means that the thermal policy manager 101 in this fourth, critical thermal state 320 may cause the shutdown of hardware and/or software modules that are outside of emergency 911 telephone calls and GPS functions. The thermal policy manager 101 may shut down modules in sequence and/or in parallel depending upon the critical temperatures being monitored by the thermal sensors 157 and the change in temperature being observed by the thermal policy manager 101. The temperature range for this fourth thermal state 320 may include those of about 100°C and above. [00117] FIG. 10 is a diagram illustrating an exemplary graph 500 of temperature versus time and corresponding thermal policy states 305, 310, 315, and 320. At the first point 503 of the temperature plot or line 505, the thermal policy manager 101 may receive a first interrupt temperature reading of 40°C from one or more thermal sensors 157.
Because this first temperature reading of 40°C may be below the maximum temperature of 50°C set for the normal thermal state 305, the thermal policy manager 101 may remain in the first or normal thermal state 305.
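Purely as an illustrative, non-limiting sketch (the function and constant names below are hypothetical and not part of the original disclosure), the exemplary temperature ranges described above for thermal states 305, 310, 315, and 320 could be expressed in Python as a simple threshold lookup:

    # Hypothetical sketch: map a sensed temperature to one of the four exemplary
    # thermal policy states. Thresholds mirror the approximate ranges in the text:
    # below 50 C -> 305, 50-80 C -> 310, 80-100 C -> 315, 100 C and above -> 320.
    NORMAL, QOS, SEVERE, CRITICAL = 305, 310, 315, 320

    def thermal_state_for(temp_c: float) -> int:
        if temp_c < 50.0:
            return NORMAL       # first, normal thermal state 305
        if temp_c < 80.0:
            return QOS          # second, quality of service thermal state 310
        if temp_c < 100.0:
            return SEVERE       # third, severe thermal state 315
        return CRITICAL         # fourth, critical thermal state 320

    # A 40 C interrupt reading, as at the first point 503, stays in state 305.
    assert thermal_state_for(40.0) == NORMAL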
[00118] At a second point 506 along the temperature line 505, the thermal policy
manager 101 may receive a second interrupt temperature reading of 50°C. Though 50°C may be within the selected temperature range for the first thermal state 305, if the change in temperature from the last temperature reading was significant, such as a large temperature change within a short period of time (like a 3°C change within five seconds), then such a change or jump in temperature may trigger the thermal policy manager 101 to leave the normal thermal state 305 and initiate the second, QoS thermal state 310.
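A minimal sketch of this delta-T trigger, assuming the 3°C-in-five-seconds example above and hypothetical helper names, might look as follows:

    import time

    # Hypothetical sketch: escalate the thermal state when temperature jumps
    # sharply, even if the absolute reading is still inside the current range.
    JUMP_DELTA_C = 3.0     # exemplary "large" change from the text
    JUMP_WINDOW_S = 5.0    # exemplary "short period of time"

    def jump_detected(prev_temp_c, prev_time_s, temp_c, now_s):
        """True when temperature rose by at least JUMP_DELTA_C within JUMP_WINDOW_S."""
        return (now_s - prev_time_s) <= JUMP_WINDOW_S and (temp_c - prev_temp_c) >= JUMP_DELTA_C

    t0 = time.monotonic()
    # A 47 C -> 50 C rise in four seconds would trigger the move to state 310.
    assert jump_detected(47.0, t0, 50.0, t0 + 4.0)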
[00119] Between the second point 506 and third point 509 of the temperature line 505, the temperature of the PCD 100 was above 50°C and the thermal policy manager 101 may have requested or activated one or more thermal mitigation techniques in order to lower the temperature of the PCD 100. At the third point 509 of the temperature line 505, the thermal policy manager 101 may change the thermal state of the PCD 100 from the second state 310 to the first and normal state 305.
[00120] At the fourth point 512, the thermal policy manager 101 may observe that the temperature trend is moving in an upward fashion or, in other words, the temperature line 505 may have a positive slope or change in delta T. The thermal policy manager 101 may change the thermal state of the PCD 100 in view of this data from the first thermal state 305 to the second, QoS thermal state 310. In the second thermal state 310, the thermal policy manager 101 may request or it may activate one or more thermal mitigation techniques that should not significantly impact the quality of service provided by the PCD 100. The second thermal state 310 may include a temperature range of about 50°C to about 80°C.
[00121] Moving along the temperature line 505 to the fifth point 515 which has a
magnitude of about 80°C, the thermal policy manager 101 may initiate a change of thermal state from the second, QoS thermal state 310 to the third and severe thermal state 315. As noted previously, the temperature range for this third thermal state may include a range of about 80°C to about 100°C. In this third and severe thermal state 315, the thermal policy manager 101 may be requesting or activating a plurality of thermal mitigation techniques that may impact the quality of service and performance of the PCD 100.
The segment of the temperature line 505 between the fifth point 515 and sixth point 518 reflects that the third and severe thermal state 315 has been unsuccessful in mitigating the temperature rise within the PCD 100. Therefore, at the sixth point 518, which may have a magnitude of approximately 100°C, the thermal policy manager 101 may enter into the fourth and critical state 320. In this fourth and critical state 320, the thermal policy manager 101 may activate or request that certain hardware and/or software components be shut down in order to alleviate the current thermal load. As noted previously, the thermal policy manager 101 may cause any hardware and/or software component outside of emergency 911 call functions and GPS functions to be shut down while in this fourth thermal state 320.
Moving along the temperature line 505 to the seventh point 521, the segment of the line 505 between the sixth point 518 and seventh point 521 reflects that the critical thermal state 320 and severe thermal state 315 were successful in lowering the temperature of the PCD 100. As noted previously, one or more thermal states may be jumped or skipped depending upon the temperature measured by the thermal sensors 157 and observed by the thermal policy manager 101.
FIGs. 11 A & 1 IB are logical flowcharts illustrating a method 600 for managing one or more thermal policies of a PCD 100. Method 600A of FIG. 11A starts with first block 605 in which the thermal policy manager 101 may monitor temperature with internal and external thermal sensors 157 while in a first thermal state 305. This first block 605 generally corresponds with the first thermal state 305 illustrated in FIGs. 8 & 9. As noted previously, the thermal policy manager 101 may monitor, actively poll, and/or receive interrupts from one or more thermal sensors 157. In this particular thermal state, the thermal policy manager 101 does not apply any thermal mitigation techniques. The PCD 100 may perform at its optimal level without regard to any thermal loading conditions in this first thermal state.
Next, in decision block 610, the thermal policy manager 101 may determine if a temperature change (delta T) has been detected by one or more thermal sensors 157. If the inquiry to decision block 610 is negative, then the "NO" branch is followed back to block 605. If the inquiry to decision block 610 is positive, then the "YES" branch is followed to block 615 in which the thermal policy manager 101 may increase the frequency of the monitoring of the thermal sensors 157. In block 615, the thermal policy manager may actively poll the thermal sensors 157 more frequently or it may request the thermal sensors 157 to send more frequent interrupts that provide temperature data. This increased monitoring of thermal sensors 157 may occur in the first or normal state 305 and it may also occur in the second or quality of service thermal state 310.
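One way to picture blocks 605 through 615 is a polling loop that switches to a faster sampling period once a temperature change is seen; the sketch below is illustrative only, and read_sensor() together with both periods are assumptions rather than part of the disclosure:

    import time

    SLOW_PERIOD_S = 10.0   # assumed first (normal) polling rate
    FAST_PERIOD_S = 1.0    # assumed increased polling rate for block 615

    def monitor(read_sensor):
        """Poll read_sensor() and speed up polling once a delta T is detected."""
        period = SLOW_PERIOD_S
        last = read_sensor()
        while True:
            time.sleep(period)
            temp = read_sensor()
            if temp != last:             # decision block 610: delta T detected
                period = FAST_PERIOD_S   # block 615: increase monitoring frequency
            last = temp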
[00126] Next, in decision block 620, the thermal policy manager 101 may determine if the next thermal state has been reached or achieved by the PCD 100. In this decision block 620, the thermal policy manager 101 may be determining if the temperature range assigned to the second thermal state 310 has been achieved. Alternatively, the thermal policy manager in this decision block 620 may be determining if a significant change in temperature (delta T) has occurred since a last reading.
[00127] If the inquiry to decision block 620 is negative, then the "NO" branch is
followed back to decision block 610. If the inquiry to decision block 620 is positive, then the "YES" branch is followed to routine or subroutine 625. Routine or subroutine 625 may comprise a second thermal state 310 also referred to as the QoS state 310 in which thermal policy manager 101 may apply or request one or more thermal mitigation techniques described above in connection with FIG. 9. For example, the thermal policy manager 101 may request the monitor 114 and/or the O/S module 207 to initiate thermal mitigation techniques such as, but not limited to, (1) load scaling and/or (2) load dynamic scaling and/or (3) spatial load shifting and/or (4) process load reallocation as described above.
[00128] Subsequently, in decision block 630, the thermal policy manager 101 may
determine if the one or more thermal mitigation techniques of the second or QoS state 310 were successful and if the current temperature as detected by the one or more thermal sensors 157 falls within the next lower thermal range for the first or normal state 305. If the inquiry to decision block 630 is positive, then the "YES" branch is followed back to block 605. If the inquiry to decision block 630 is negative, then the "NO" branch is followed to decision block 635.
[00129] In decision block 635, the thermal policy manager 101 may determine if the
PCD 100 has now entered into the third or severe thermal state 315 according to the temperature as detected by the one or more thermal sensors 157. Alternatively, the thermal policy manager 101 may determine if the PCD 100 has entered into the third or severe thermal state 315 by determining if a significant change in temperature (delta T) has occurred.
[00130] If the inquiry to decision block 635 is negative, the "NO" branch is followed back to decision block 620. If the inquiry to decision block 635 is positive, then the "YES" branch is followed to submethod or subroutine 640.
[00131] In submethod or subroutine 640, the thermal policy manager 101 has determined that the PCD 100 has entered into the third or severe thermal state. The thermal policy manager 101 may then activate or request that one or more thermal mitigation techniques be applied. As noted previously, the thermal policy manager 101 in this third or severe thermal state 315 may start continuous monitoring, polling, or receiving interrupts from thermal sensors 157 so that temperature is sensed more continuously / frequently compared to the second lower thermal state 310.
[00132] In this exemplary thermal state 315, the thermal policy manager 101 may apply or request that the monitor module 114 and/or O/S module 207 apply more aggressive thermal mitigation techniques and/or additional thermal mitigation techniques (relative to the second thermal state 310) with probable perceivable degradation of performance observed by an operator of the PCD 100. According to this exemplary thermal state 315, the thermal policy manager 101 may cause reduction in power to one or more hardware devices like amplifiers, processors, etc. The thermal policy manager 101 may also shift workloads among different hardware devices in a spatial manner in order to bring active devices off-line and to bring inactive devices on-line. Further, the thermal policy manager may increase the percentage of process loads reallocated from a high performance sub-processor block to the main processor blocks. The thermal mitigation techniques of this third and severe thermal state 315 may be the same as those described above with respect to the second, quality of service thermal state 310. As explained above, however, these same thermal mitigation techniques may be applied in a more aggressive manner.
[00133] Next, in decision block 645, the thermal policy manager 101 may determine whether the one or more thermal mitigation techniques applied in subroutine 640 were successful in preventing escalation of temperature for the PCD 100. If the inquiry to decision block 645 is negative, then the "NO" branch is followed to step 655 of FIG. 11B. If the inquiry to decision block 645 is positive, then the "YES" branch is followed to step 650 in which the thermal policy manager 101 determines the current thermal state of the PCD 100 based on temperature readings provided by the one or more thermal sensors 157.
[00134] FIG. 11B is a continuation flow chart relative to the flowchart illustrated in FIG. 11A. The method 600B of FIG. 11B starts with decision block 655 in which the thermal policy manager 101 may determine if the PCD 100 has entered into the fourth or critical thermal state 320 based on the temperature being detected by one or more thermal sensors 157. If the inquiry to decision block 655 is negative, then the "NO" branch is followed to step 660 in which the thermal policy manager 101 returns the PCD 100 to the third or severe thermal state 315 and the process returns to block 635 of FIG. 11A.
[00135] If the inquiry to decision block 655 is positive, then the "YES" branch is
followed to subroutine 665 in which the thermal policy manager 101 activates or requests that one or more critical thermal mitigation techniques be implemented. The thermal policy manager 101 in this fourth, critical thermal state 320 may cause the shutdown of hardware and/or software modules that are outside of emergency 911 telephone calls and GPS functions. The thermal policy manager 101 may shut down modules in sequence and/or in parallel depending upon the critical temperatures being monitored by the thermal sensors 157 and the change in temperature being observed by the thermal policy manager 101.
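A rough sketch of subroutine 665, under the assumption that modules expose a shutdown() hook and that only emergency 911 calling and GPS are treated as essential, could read:

    # Hypothetical sketch of the critical-state mitigation: shut down nonessential
    # modules one by one (or in parallel) until the temperature stops rising.
    ESSENTIAL = {"e911_call", "gps"}   # assumed module names

    def critical_mitigation(modules, temperature_still_rising):
        """modules: dict of module name -> object with a shutdown() method."""
        for name, module in modules.items():
            if name in ESSENTIAL:
                continue
            module.shutdown()
            if not temperature_still_rising():
                break   # escalation halted; keep remaining modules running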
[00136] Subsequently, in decision block 670, the thermal policy manager 101 may
determine whether the thermal mitigation techniques applied in routine or submethod 665 were successful in preventing any escalation of temperature of the PCD 100 as detected by the thermal sensors 157. If the inquiry to decision block 670 is negative, then the "NO" branch is followed back to routine or submethod 665.
[00137] If the inquiry to decision block 670 is positive, then the "YES" branch is
followed to step 675 in which the thermal policy manager 101 determines the current thermal state of the PCD 100 based on temperature readings supplied by one or more thermal sensors 157. Once the temperature readings are assessed by the thermal policy manager 101, the thermal policy manager 101 initiates the thermal state corresponding to the temperature ranges detected by the thermal sensors 157.
[00138] FIG. 12 is a logical flowchart illustrating sub-method or subroutines 625, 640, and 665 for applying process load reallocation thermal mitigation techniques. Block 705 is the first step in the submethod or subroutine for applying process load reallocation thermal mitigation techniques. In this first block 705, the thermal policy manager 101 may determine the current thermal state based on temperature readings provided by thermal sensors 157 most proximate to the various CPU and/or GPU cores. Once the current thermal state is determined by the thermal policy manager 101, in block 710 the thermal policy manager 101 may then review the current process load allocations for the various cores associated with the temperature readings. Next, in block 715, the thermal policy manager 101 may review the current workloads of one or more available, or otherwise underutilized, hardware and/or software modules.
[00139] Next, in block 720, the thermal policy manager 101 may reallocate or issue commands to reallocate the current workloads among the various cores, in order to reduce workload or to shift the workload. The proportion of processing load reallocation, the particular portion of process load which is reallocated and the processing location to which load is reallocated, may be accomplished according to the current thermal state determined by the thermal policy manager 101. Advantageously, by reducing workload in a core, or area of a core, that is associated with a high temperature reading through reallocation of all or part of the workload to another core or area, thermal energy generation can be mitigated.
[00140] So, for the second or QoS thermal state 310, in block 720, the thermal policy manager 101 may initiate or request the monitor module 114 and/or operating system ("O/S") module 207 of FIG. 2A to start applying thermal mitigation techniques, but with the objective of maintaining high performance with little or no perceptible degradation of the quality of service as perceived by the operator of the PCD 100.
[00141] According to this exemplary second thermal state 310 illustrated in FIG. 9, the thermal policy manager 101 may request the monitor 114 and/or the O/S module 207 to initiate thermal mitigation techniques such as, but not limited to, (1) load scaling and/or (2) load dynamic scaling and/or (3) spatial load shifting and/or (4) process load reallocation as described above.
[00142] For the third or severe thermal state 315, in block 720, the thermal policy
manager 101 may start continuous monitoring, polling, or receiving interrupts from thermal sensors 157 so that temperature is sensed more continuously / frequently compared to the second lower thermal state 310. In this exemplary thermal state 315, the thermal policy manager 101 may apply or request that the monitor module 114 and/or O/S module 207 apply more aggressive thermal mitigation techniques and/or additional thermal mitigation techniques (relative to the second thermal state 310) with probable perceivable degradation of performance observed by an operator of the PCD 100. According to this exemplary thermal state 315, the thermal policy manager 101 may cause reduction in power to one or more hardware devices like amplifiers, processors, etc., or complete process load reallocation from high performance sub-processor blocks to lower power density main processor blocks.
The thermal policy manager 101 may also shift workloads among different hardware devices in a spatial manner, to bring active devices off-line and to bring inactive devices on-line. The thermal mitigation techniques of this third and severe thermal state 315 may be the same as those described above with respect to the second, quality of service thermal state 310. However, these same thermal mitigation techniques may be applied in a more aggressive manner, as described above.
For the fourth or critical thermal state 320, in block 720, this thermal state 320 may be similar to conventional techniques that are designed to eliminate functionality and operation of a PCD 100 in order to avoid critical temperatures. The fourth thermal state 320 may comprise a "critical" state in which the thermal policy manager 101 applies or triggers the shutting down of non-essential hardware and/or software. The temperature range for this fourth thermal state may include temperatures of about 100°C and above. The submethod 625, 640, or 665 then returns to an appropriate step in the thermal management method 600 depending upon the current thermal state of the PCD 100.
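As a non-authoritative sketch of block 720 across these mitigation states, the fraction of load moved off the hottest core could grow with the thermal state; the per-state fractions and helper names below are illustrative assumptions only:

    # Hypothetical sketch of blocks 705-720: pick the hottest core, pick an
    # underutilized target, and move a state-dependent fraction of the load.
    REALLOC_FRACTION = {310: 0.25, 315: 0.75, 320: 1.0}   # QoS, severe, critical (assumed)

    def reallocate(loads, temps, thermal_state):
        """loads and temps are dicts keyed by core id; returns a new allocation."""
        hottest = max(temps, key=temps.get)    # core tied to the high temperature reading
        coolest = min(loads, key=loads.get)    # available or otherwise underutilized core
        moved = loads[hottest] * REALLOC_FRACTION.get(thermal_state, 0.0)
        new_loads = dict(loads)
        new_loads[hottest] -= moved
        new_loads[coolest] += moved
        return new_loads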
FIG. 13A is a schematic 800A for a four-core multi-core processor 110 and different process loads that may be reallocated within the multi-core processor 110. The multi-core processor 110 may be a graphics processor 110 for supporting graphical content projected on the display 132 or a central processor 110 for execution of various applications.
The four-core multi-core processor 110 has a zeroth core 222, a first core 224, a second core 226, and a third core 228. The first process load scenario for the multi-core processor 110 is demonstrated by multi-core processor 110A in which the zeroth core 222 has a process workload of 70% (out of a 100% full work capacity/utilization for a particular core), while the first core 224 has a process workload of 30%, the second core 226 has a process workload of 50%, and the third core 228 has a process workload of 10%. If the thermal policy manager 101 enters any one of the thermal states 310, 315, 320 described above in which thermal mitigation techniques are applied to the PCD 100, a process reallocation thermal load mitigation technique as illustrated in this FIG. 13A may be implemented. According to this process reallocation thermal load mitigation technique, the thermal policy manager 101, the monitor module 114, and/or the O/S module 207 may shift the process workload of one core to one or more other cores in a multi-core processor 110.
[00147] In the exemplary embodiment illustrated in FIG. 13A, the process workload of the zeroth core 222 may be shifted such that additional work is performed by the remaining three other cores of the multi-core processor 110. Multi-core processor 110B illustrates such a shift in that 20% of the process workload for the zeroth core 222 and 40% of the process workload for the second core 226 were shifted among the remaining two cores such that the process workload experienced by the zeroth core 222 was reduced down to 50% while the process workload experienced by the second core 226 was reduced down to 10%. Meanwhile, the process workload of the first core 224 was increased to 70% while the process workload of the third core 228 was increased to 30%. One of ordinary skill in the art recognizes that other magnitudes and combinations of shifting workload and corresponding workload percentages are well within the scope of the invention.
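The FIG. 13A shift can be restated numerically; the source-to-destination pairing below is one mapping consistent with the stated before-and-after percentages:

    # Worked restatement of the 110A -> 110B example: 20% leaves the zeroth core
    # and 40% leaves the second core, landing on the third and first cores.
    before = {0: 70, 1: 30, 2: 50, 3: 10}     # processor 110A
    shifts = [(0, 3, 20), (2, 1, 40)]         # (from_core, to_core, percent) - assumed pairing
    after = dict(before)
    for src, dst, pct in shifts:
        after[src] -= pct
        after[dst] += pct
    print(after)   # {0: 50, 1: 70, 2: 10, 3: 30}, matching processor 110B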
[00148] The multi-core processors 110C-110D provide a demonstration of an exemplary shift of a "hole" in which one or more cores may effectively be cooled by letting them idle in a predetermined pattern or in a pattern dictated by thermal measurements. A 'hole' or core that is not being utilized is effectively moved in MIPS around a group of cores to cool surrounding cores in the course of several seconds. In the exemplary embodiment illustrated by multi-core processor 110C of FIG. 13A, the zeroth core 222 and the first core 224 may have exemplary workloads of 80% while the second core 226 and the third core 228 have no loads whatsoever. In this scenario, if either or both of the zeroth core 222 and first core 224 reach the second thermal state 310, the third thermal state 315, or the fourth thermal state 320, then the thermal policy manager 101 may apply or request that a process reallocation thermal load mitigation technique be applied in which all of the workload of the two active cores 222, 224 be shifted to the two inactive cores 226, 228. The fourth processor 110D demonstrates such a shift in which the zeroth core 222 and first core 224 no longer have any workloads while the second core 226 and third core 228 have assumed the previous workload which was managed by the zeroth core 222 and first core 224.
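A simple way to picture the rotating "hole" is a periodic rotation of the per-core workload map, as in the sketch below; the two-position rotation, the period, and the apply() hook are assumptions chosen to reproduce the 110C-to-110D example:

    import time

    def rotate_hole(loads, steps, period_s=2.0, apply=lambda allocation: None):
        """Cyclically rotate the core workload map every period_s seconds."""
        cores = sorted(loads)
        for _ in range(steps):
            loads = {cores[(i + 2) % len(cores)]: loads[cores[i]] for i in range(len(cores))}
            apply(loads)            # hand the new allocation to the scheduler
            time.sleep(period_s)
        return loads

    # 110C: cores 0 and 1 at 80% each -> 110D: cores 2 and 3 at 80% each.
    print(rotate_hole({0: 80, 1: 80, 2: 0, 3: 0}, steps=1, period_s=0.0))
    # -> {2: 80, 3: 80, 0: 0, 1: 0}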
[00149] In FIG. 13B, the multi-core processors 110E-110F provide a demonstration of the exemplary FIG. 12 process load reallocation thermal mitigation technique. The FIG. 12 process load reallocation thermal mitigation technique is applied within a given core 228 such that a hotspot 48A within the core 228 may be effectively distributed over an increased area to form hotspot 48B. Advantageously, by reallocating a process load burden from a high powered sub-processor block 228A to a main processor block 228B, hotspot 48A, which has a high rate of energy dissipation per unit area, may be transformed into hotspot 48B which has a lower rate of energy dissipation per unit area. The energy dissipation per unit area (and, thus, the temperature per unit area) may be lower for hotspot 48B because the processing area used to process the reallocated task has a lower power density per unit area than the high power density sub-processor. Additionally, the energy dissipation per unit area may also be lower for hotspot 48B than hotspot 48A because the reallocated processing task takes longer to complete, thus necessitating that less energy be dissipated per unit of area over a given unit of time.
[00150] Returning to a previous example, thermal energy generation associated with a process load may be mitigated by reallocation of the process load. An embodiment that includes a CPU 110E, 110F having a core 228 with a main processing block 228B and a higher performing sub-processor block 228A may have a main processing block 228B that represents three-fourths of the CPU 110E area and a sub-processor block 228A that represents the remaining quarter of the CPU 110E, 110F area. The main processor block 228B may have an associated power density ("PD") that dissipates one-half of the total power of the overall CPU 110E, 110F, while the sub-processor block 228A, having increased computational power relative to the main processor, also has an associated power density that dissipates one-half of the total power.
[00151] In such an exemplary case, one of ordinary skill in the art will recognize that the sub-processor block 228A, which provides increased computational power to the overall CPU 110E, 110F, represents a power density that is over twice that of the larger main processing block 228B [PD228A = (P/2)/(A/4) = 2P/A; PD228B = (P/2)/(3A/4) = (2/3)P/A] and, because power density is directly proportional to the generation of thermal energy, for a given processing load the sub-processor block 228A will cause the dissipation of more thermal energy than main processing block 228B.
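The bracketed power-density comparison can be checked numerically with unit values for total power P and die area A:

    # Numerical restatement of paragraph [00151]: half the power over one quarter
    # of the area versus half the power over three quarters of the area.
    P, A = 1.0, 1.0
    pd_sub = (P / 2) / (A / 4)        # sub-processor block 228A: 2 * P/A
    pd_main = (P / 2) / (3 * A / 4)   # main processor block 228B: (2/3) * P/A
    print(pd_sub, pd_main, round(pd_sub / pd_main, 6))   # 2.0 0.666... 3.0 -> three times the density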
[00152] As illustrated by CPU 110E, sub-processor block 228A is processing 80% of a given process load such as, for example, a gaming application while main processor block 228B is processing a modest 20% remainder of the process load.
Advantageously, the increased computational power associated with sub-processor block 228A (relative to the main processing block 228B) may establish an allocation bias for high computational applications from the scheduler 207, thus explaining the 80% process load burden being allocated to sub-processor block 228A. That is, because sub-processor block 228A is high powered, the default action from the scheduler 207 may be to allocate any application requiring high computational power to sub-processor 228A. However, excess or prolonged processing demands on sub-processor block 228A may generate excess thermal energy, as represented in the illustration by hotspot 48A. For purposes of illustration, hotspot 48A may be on the order of 80 °C, a temperature perhaps associated with the threshold to severe state 315.
[00153] As previously described, sensors 157 placed near CPU 110E or even, more specifically, near processor core 228 may read hotspot 48A and subsequently trigger thermal policy manager module 101 to initiate a thermal mitigation technique including process load reallocation. One of ordinary skill in the art will realize that process load reallocation from a high power density sub-processor 228A to a lower power density main processor 228B will serve to lower the aggregate thermal dissipation across the core. Moreover, it is envisioned that the thermal policy manager module 101, when triggered by temperature readings of various cores or areas within cores, may direct the O/S scheduler to assign new processing loads, or reallocate existing processing loads, based on a thermal bias factor associated with core temperatures. That is, based on the real-time temperature readings of the various processing cores or core sub-areas, it is envisioned that a thermal bias factor may be assigned to the various processing cores or core sub-areas such that processing load burdens are allocated, or reallocated in a manner that manages thermal energy generation without overly sacrificing user experience or device performance. Moreover, in an effort to ensure that QoS remains at its highest level without jeopardizing component integrity, it is envisioned that a bias factor may be included in some embodiments that serves to drive processing burdens to the higher power density sub-cores.
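One possible, purely illustrative reading of such a thermal bias factor is a score that weighs each processing area's real-time temperature against its relative computational power; the scoring formula, the 50°C reference point, and the perf_weight knob below are assumptions, not the disclosed method:

    # Hypothetical sketch: choose the processing area for a new or reallocated
    # task by combining a temperature-derived thermal bias with a performance bias.
    def pick_target(areas, temps_c, perf, perf_weight=0.1):
        """areas: list of ids; temps_c: id -> deg C; perf: id -> relative throughput."""
        def score(area):
            thermal_bias = 1.0 / (1.0 + max(temps_c[area] - 50.0, 0.0))  # cooler scores higher
            return thermal_bias + perf_weight * perf[area]               # optional pull toward fast sub-cores
        return max(areas, key=score)

    # With sub-processor 228A at 80 C and main block 228B at 55 C, the cooler
    # main block wins despite its lower throughput.
    print(pick_target(["228A", "228B"], {"228A": 80.0, "228B": 55.0}, {"228A": 1.0, "228B": 0.5}))
    # -> 228B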
[00154] After reallocation of the process load, core 228 of CPU 110F may have a
workload allocation of 60% to main processor block 228B and 40% to sub-processor block 228A. In the illustration, the reduction of processing burden from the high PD sub-processor block 228A and the relative increase of processing burden to the lower PD main processor block 228B inevitably caused a reduction in QoS. However, the reallocation of the process burden, or a portion thereof, to the lower PD main processor block 228B caused the generation of thermal energy to be spread across a larger area or footprint of the core 228 thus creating a larger area with a decreased temperature per unit of area relative to the previous smaller area, as is illustrated by the "cooler" and larger hotspot 48B. For purposes of illustration, hotspot 48B may be on the order of 50 °C, a temperature perhaps associated with the threshold to normal state 305.
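The effect on peak dissipation per unit area can be restated with the area split and load shares given above (one quarter of the core for block 228A, three quarters for block 228B):

    # Worked restatement of the FIG. 13B reallocation: the peak load per unit of
    # core area falls when 228A's share drops from 80% to 40% of the task.
    areas = {"228A": 0.25, "228B": 0.75}
    for label, loads in [("before", {"228A": 0.80, "228B": 0.20}),
                         ("after",  {"228A": 0.40, "228B": 0.60})]:
        peak = max(loads[block] / areas[block] for block in areas)
        print(label, round(peak, 2))   # before 3.2 -> after 1.6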
[00155] From the FIG. 13B example, it can be seen that embodiments utilizing thermal load steering parameter(s) to reallocate processing loads from one component to another, such as, for example, from a sub-processor block of core 228 to a main processor block of core 228, may realize the benefit of lower temperatures associated with thermal energy dissipation over a larger area for what may be a relatively minor tradeoff of processing performance. The main processor blocks 228B may process the load more slowly, thus translating to a lower QoS, but dissipate the thermal energy associated with a given workload over a larger area and longer time compared to the sub-processors 228A, thereby possibly avoiding critical temperatures in PCD 100.
[00156] FIG. 14 illustrates an exemplary floor plan 1400 of an application specific
integrated circuit ("ASIC") 102 that may benefit from the application of various thermal mitigation techniques such as those described above. In the FIG. 14 illustration, GPU bank 135 and CPU bank 110 represent the primary components generating thermal energy on ASIC 102. Power management integrated circuits ("PMICs") 182, for example, do not reside on ASIC 102, but are represented as being in near proximity 1405 to CPU bank 110. For example, due to limited physical space within a PCD 100, PMICs 182 may reside immediately behind and adjacent to ASIC 102. As such, one of ordinary skill in the art will recognize that thermal energy dissipated from a PMIC 182, or other heat generating component, may adversely affect temperature readings taken from sensors 157 on any of cores 222, 224, 226, 228 within CPU 110.
[00157] PMICs 182, as well as other components residing within PCD 100, may be
placed in immediate proximity 1405 to a given processing core, thereby generating a bias in the processing core for a higher average operating temperature when the thermal energy dissipated from the components propagates through the core. One of ordinary skill in the art will recognize that the adverse effect of these proximate components on processing core temperature can be difficult to predict or simulate across various PCD 100 configurations and/or use cases. As such, one of ordinary skill in the art will also recognize that an advantage of thermal mitigation algorithms that can be leveraged in real-time, or near real-time, is that temperature bias in processing components which may result from adjacent components within PCD 100, such as the exemplary PMICs 182, can be accommodated without custom configurations or pre-generated thermal load steering scenarios and parameters. That is, processing loads can be allocated or reallocated in real-time based on real-time, actual temperature readings.
Certain steps in the processes or process flows described in this specification naturally precede others for the invention to function as described. However, the invention is not limited to the order of the steps described if such order or sequence does not alter the functionality of the invention. That is, it is recognized that some steps may be performed before, after, or in parallel with (substantially simultaneously with) other steps without departing from the scope and spirit of the invention. In some instances, certain steps may be omitted or not performed without departing from the invention. Further, words such as "thereafter", "then", "next", etc. are not intended to limit the order of the steps. These words are simply used to guide the reader through the description of the exemplary method.
Additionally, one of ordinary skill in programming is able to write computer code or identify appropriate hardware and/or circuits to implement the disclosed invention without difficulty based on the flow charts and associated description in this specification, for example.
Therefore, disclosure of a particular set of program code instructions or detailed hardware devices is not considered necessary for an adequate understanding of how to make and use the invention. The inventive functionality of the claimed computer implemented processes is explained in more detail in the above description and in conjunction with the drawings, which may illustrate various process flows.
In one or more exemplary aspects, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted as one or more instructions or code on a computer-readable medium. Computer-readable media include both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that may be accessed by a computer. By way of example, and not limitation, such computer-readable media may comprise RAM, ROM,
EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to carry or store desired program code in the form of instructions or data structures and that may be accessed by a computer.

[00162] Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line ("DSL"), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium.
[00163] Disk and disc, as used herein, includes compact disc ("CD"), laser disc, optical disc, digital versatile disc ("DVD"), floppy disk and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
[00164] Therefore, although selected aspects have been illustrated and described in
detail, it will be understood that various substitutions and alterations may be made therein without departing from the spirit and scope of the present invention, as defined by the following claims.

Claims

CLAIMS

What is claimed is:
1. A method for managing thermal energy generation in a portable computing device, the method comprising:
placing a temperature sensor proximate to a thermal energy generating component of a chip in a portable computing device;
monitoring, at a first rate, temperature readings generated by the temperature sensor, wherein the temperature readings correlate to a process load within the thermal energy generating component; and
based on a first monitored temperature reading, reallocating a process load portion from a first processing area of the thermal energy generating component to a second processing area of the thermal energy generating component, wherein reallocation of the process load portion serves to lower the amount of energy generated at any unit area of the component over a unit of time.
2. The method of claim 1, further comprising:
based on a second monitored temperature reading, reallocating a process load portion from the second processing area of the thermal energy generating component to the first processing area of the thermal energy generating component.
3. The method of claim 2, wherein the second monitored temperature reading indicates that the temperature of the first processing area has cooled relative to the first monitored temperature reading.
4. The method of claim 1, wherein the reallocated process load portion equals about one-hundred percent of the process load in the first processing area.
5. The method of claim 1, wherein the first processing area is a sub-processor within a core which also contains the second processing area.
6. The method of claim 1, wherein the first processing area has an associated power density that exceeds the power density associated with the second processing area.
7. The method of claim 1, further comprising:
defining a plurality of thermal states, wherein each thermal state contains a range of temperatures; and
wherein the first monitored temperature reading indicates that the temperature of the thermal energy generating component has increased from a temperature contained in a first thermal state to a temperature contained in a second thermal state.
8. The method of claim 7, further comprising:
based on a second monitored temperature reading, reallocating a process load portion from the second processing area of the thermal energy generating component to the first processing area of the thermal energy generating component, wherein the second monitored temperature reading indicates that the temperature of the thermal energy generating component has decreased from a temperature contained in the second thermal state to a temperature contained in the first thermal state.
9. The method of claim 7, wherein the amount of load to be reallocated from the first processing area to the second processing area varies according to the thermal state which contains the first monitored temperature reading.
10. The method of claim 7, further comprising increasing the rate of monitoring from the first rate to a second rate, wherein the increased rate of monitoring is triggered by the first monitored temperature reading.
11. A computer system for managing thermal energy generation in a portable computing device, the system comprising:
a multi-core processor operable to:
monitor, at a first rate, temperature readings generated by a proximately placed temperature sensor, wherein the temperature readings correlate to a process load within a given core of the multi-core processor; and
based on a first monitored temperature reading, reallocate a process load portion from a first processing area of the given core to a second processing area of the given core, wherein reallocation of the process load portion serves to lower the amount of energy generated at any unit area of the component over a unit of time.
12. The system of claim 11, wherein the multi-core processor is further operable to: based on a second monitored temperature reading, reallocate a process load portion from the second processing area of the given core to the first processing area of the given core.
13. The system of claim 12, wherein the second monitored temperature reading indicates that the temperature of the first processing area has cooled relative to the first monitored temperature reading.
14. The system of claim 11, wherein the reallocated process load portion equals about one-hundred percent of the process load in the first processing area.
15. The system of claim 11, wherein the first processing area is a sub-processor within the given core.
16. The system of claim 11, wherein the first processing area has an associated power density that exceeds the power density associated with the second processing area.
17. The system of claim 11, wherein the multi-core processor is further operable to recognize a plurality of thermal states, wherein:
each thermal state contains a range of temperatures; and
the first monitored temperature reading indicates that the temperature of the given core has increased from a temperature contained in a first thermal state to a temperature contained in a second thermal state.
18. The system of claim 17, wherein the multi-core processor is further operable to: based on a second monitored temperature reading, reallocate a process load portion from the second processing area of the given core to the first processing area of the given core, wherein the second monitored temperature reading indicates that the temperature of the given core has decreased from a temperature contained in the second thermal state to a temperature contained in the first thermal state.
19. The system of claim 17, wherein the amount of load reallocated from the first processing area to the second processing area varies according to the thermal state which contains the first monitored temperature reading.
20. The system of claim 17, wherein the multi-core processor is further operable to increase the rate of monitoring from the first rate to a second rate, wherein the increased rate of monitoring is triggered by the first monitored temperature reading.
21. A computer system for managing thermal energy generation in a portable computing device, the system comprising:
means for monitoring, at a first rate, temperature readings generated by a temperature sensor placed proximate to a thermal energy generating component of a chip in a portable computing device, wherein the temperature readings correlate to a process load within the thermal energy generating component; and
means for reallocating a process load portion from a first processing area of the thermal energy generating component to a second processing area of the thermal energy generating component, wherein reallocation of the process load portion is triggered by a first monitored temperature reading and serves to lower the amount of energy generated at any unit area of the component over a unit of time.
22. The system of claim 21, further comprising:
means for reallocating a process load portion from the second processing area of the thermal energy generating component to the first processing area of the thermal energy generating component, wherein reallocation of the process load portion is triggered by a second monitored temperature reading.
23. The system of claim 22, wherein the second monitored temperature reading indicates that the temperature of the first processing area has cooled relative to the first monitored temperature reading.
24. The system of claim 21, wherein the reallocated process load portion equals about one-hundred percent of the process load in the first processing area.
25. The system of claim 21, wherein the first processing area is a sub-processor within a core which also contains the second processing area.
26. The system of claim 21, wherein the first processing area has an associated power density that exceeds the power density associated with the second processing area.
27. The system of claim 21, further comprising:
means for defining a plurality of thermal states, wherein each thermal state contains a range of temperatures; and
wherein the first monitored temperature reading indicates that the temperature of the thermal energy generating component has increased from a temperature contained in a first thermal state to a temperature contained in a second thermal state.
28. The system of claim 27, further comprising:
means for reallocating a process load portion from the second processing area of the thermal energy generating component to the first processing area of the thermal energy generating component, wherein reallocation of the process load portion is triggered by the second monitored temperature reading and the second monitored temperature reading indicates that the temperature of the thermal energy generating component has decreased from a temperature contained in the second thermal state to a temperature contained in the first thermal state.
29. The system of claim 27, wherein the amount of load reallocated from the first processing area to the second processing area varies according to the thermal state which contains the first monitored temperature reading.
30. The system of claim 27, further comprising means for increasing the rate of monitoring from the first rate to a second rate, wherein the increased rate of monitoring is triggered by the first monitored temperature reading.
31. A computer program product comprising a computer usable medium having a computer readable program code embodied therein, said computer readable program code adapted to be executed to implement a method for managing thermal energy generation in a portable computing device, said method comprising:
monitoring, at a first rate, temperature readings generated by a temperature sensor placed proximate to a thermal energy generating component of a chip in a portable computing device, wherein the temperature readings correlate to a process load within the thermal energy generating component; and
based on a first monitored temperature reading, reallocating a process load portion from a first processing area of the thermal energy generating component to a second processing area of the thermal energy generating component, wherein reallocation of the process load portion serves to lower the amount of energy generated at any unit area of the component over a unit of time.
32. The computer program product of claim 31, wherein the program code implementing the method further comprises:
based on a second monitored temperature reading, reallocating a process load portion from the second processing area of the thermal energy generating component to the first processing area of the thermal energy generating component.
33. The computer program product of claim 32, wherein the second monitored temperature reading indicates that the temperature of the first processing area has cooled relative to the first monitored temperature reading.
34. The computer program product of claim 31, wherein the reallocated process load portion equals about one-hundred percent of the process load in the first processing area.
35. The computer program product of claim 31, wherein the first processing area is a sub-processor within a core which also contains the second processing area.
36. The computer program product of claim 31, wherein the first processing area has an associated power density that exceeds the power density associated with the second processing area.
37. The computer program product of claim 31, wherein the program code implementing the method further comprises:
defining a plurality of thermal states, wherein each thermal state contains a range of temperatures; and
wherein the first monitored temperature reading indicates that the temperature of the thermal energy generating component has increased from a temperature contained in a first thermal state to a temperature contained in a second thermal state.
38. The computer program product of claim 37, wherein the program code implementing the method further comprises:
based on a second monitored temperature reading, reallocating a process load portion from the second processing area of the thermal energy generating component to the first processing area of the thermal energy generating component, wherein the second monitored temperature reading indicates that the temperature of the thermal energy generating component has decreased from a temperature contained in the second thermal state to a temperature contained in the first thermal state.
39. The computer program product of claim 37, wherein the amount of load to be reallocated from the first processing area to the second processing area varies according to the thermal state which contains the first monitored temperature reading.
40. The computer program product of claim 37, wherein the program code implementing the method further comprises increasing the rate of monitoring from the first rate to a second rate, wherein the increased rate of monitoring is triggered by the first monitored temperature reading.
PCT/US2012/033192 2011-04-22 2012-04-12 Method and system for thermal load management in a portable computing device WO2012145212A2 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201280019740.9A CN103582857B (en) 2011-04-22 2012-04-12 Thermal Load Management in Portable Computing Devices
EP12716927.4A EP2699977A2 (en) 2011-04-22 2012-04-12 Thermal load management in a portable computing device
KR1020137030978A KR101529419B1 (en) 2011-04-22 2012-04-12 Thermal load management in a portable computing device
JP2014506456A JP6059204B2 (en) 2011-04-22 2012-04-12 Thermal load management in portable computing devices

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201161478175P 2011-04-22 2011-04-22
US61/478,175 2011-04-22
US13/197,171 US8942857B2 (en) 2011-04-22 2011-08-03 Method and system for thermal load management in a portable computing device
US13/197,171 2011-08-03

Publications (2)

Publication Number Publication Date
WO2012145212A2 true WO2012145212A2 (en) 2012-10-26
WO2012145212A3 WO2012145212A3 (en) 2013-03-28

Family

ID=47021953

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2012/033192 WO2012145212A2 (en) 2011-04-22 2012-04-12 Method and system for thermal load management in a portable computing device

Country Status (6)

Country Link
US (1) US8942857B2 (en)
EP (1) EP2699977A2 (en)
JP (1) JP6059204B2 (en)
KR (1) KR101529419B1 (en)
CN (1) CN103582857B (en)
WO (1) WO2012145212A2 (en)

Families Citing this family (56)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8768666B2 (en) * 2011-01-06 2014-07-01 Qualcomm Incorporated Method and system for controlling thermal load distribution in a portable computing device
US9047067B2 (en) * 2011-04-22 2015-06-02 Qualcomm Incorporated Sensorless detection and management of thermal loading in a multi-processor wireless device
US8718835B2 (en) * 2011-06-17 2014-05-06 Microsoft Corporation Optimized temperature-driven device cooling
CN103376869B (en) * 2012-04-28 2016-11-23 华为技术有限公司 A kind of temperature feedback control system and method for DVFS
WO2013177765A1 (en) * 2012-05-30 2013-12-05 Intel Corporation Runtime dispatching among heterogeneous group of processors
KR102038427B1 (en) * 2012-09-28 2019-10-31 삼성전자 주식회사 A Method For Voltage Control based on Temperature and Electronic Device supporting the same
US9946319B2 (en) * 2012-11-20 2018-04-17 Advanced Micro Devices, Inc. Setting power-state limits based on performance coupling and thermal coupling between entities in a computing device
KR101454219B1 (en) * 2012-11-27 2014-10-24 포항공과대학교 산학협력단 Method of power management for graphic processing unit and system thereof
CN104871109B (en) * 2012-12-17 2017-08-04 惠普发展公司,有限责任合伙企业 Touch portable computing device based on temperature
KR20140080058A (en) * 2012-12-20 2014-06-30 삼성전자주식회사 Load balancing method for multicore and mobile terminal
US9342443B2 (en) * 2013-03-15 2016-05-17 Micron Technology, Inc. Systems and methods for memory system management based on thermal information of a memory system
US20140344827A1 (en) * 2013-05-16 2014-11-20 Nvidia Corporation System, method, and computer program product for scheduling a task to be performed by at least one processor core
US9158358B2 (en) 2013-06-04 2015-10-13 Qualcomm Incorporated System and method for intelligent multimedia-based thermal power management in a portable computing device
US9323318B2 (en) 2013-06-11 2016-04-26 Microsoft Technology Licensing, Llc Scenario power management
KR102076824B1 (en) * 2013-06-28 2020-02-13 삼성디스플레이 주식회사 Protection Circuit, Circuit Protection Method Using the same and Display Device
US9495491B2 (en) * 2014-03-14 2016-11-15 Microsoft Technology Licensing, Llc Reliability aware thermal design
US10082847B2 (en) * 2014-04-01 2018-09-25 Qualcomm Incorporated Method and system for optimizing performance of a PCD while mitigating thermal generation
US10042402B2 (en) * 2014-04-07 2018-08-07 Google Llc Systems and methods for thermal management of a chassis-coupled modular mobile electronic device
US9582012B2 (en) * 2014-04-08 2017-02-28 Qualcomm Incorporated Energy efficiency aware thermal management in a multi-processor system on a chip
US10095286B2 (en) 2014-05-30 2018-10-09 Apple Inc. Thermally adaptive quality-of-service
US9530174B2 (en) * 2014-05-30 2016-12-27 Apple Inc. Selective GPU throttling
US10203746B2 (en) * 2014-05-30 2019-02-12 Apple Inc. Thermal mitigation using selective task modulation
CN104049716B (en) * 2014-06-03 2017-01-25 中国科学院计算技术研究所 Computer energy-saving method and system combined with temperature sensing
US20160161959A1 (en) * 2014-06-12 2016-06-09 Mediatek Inc. Thermal management method and electronic system with thermal management mechanism
KR102329475B1 (en) * 2014-08-27 2021-11-19 삼성전자주식회사 Apparatus and Method of controlling rendering quality
KR102210770B1 (en) * 2014-09-02 2021-02-02 삼성전자주식회사 Semiconductor device, semiconductor system and method for controlling the same
US9569221B1 (en) * 2014-09-29 2017-02-14 Amazon Technologies, Inc. Dynamic selection of hardware processors for stream processing
US9582052B2 (en) * 2014-10-30 2017-02-28 Qualcomm Incorporated Thermal mitigation of multi-core processor
US10061331B2 (en) * 2015-01-22 2018-08-28 Qualcomm Incorporated Systems and methods for detecting thermal runaway
US9785209B2 (en) 2015-03-31 2017-10-10 Qualcomm Incorporated Thermal management in a computing device based on workload detection
CN105045359A (en) * 2015-07-28 2015-11-11 深圳市万普拉斯科技有限公司 Heat dissipation control method and apparatus
US10332230B2 (en) 2015-08-31 2019-06-25 Qualcomm Incorporated Characterizing GPU workloads and power management using command stream hinting
US9749740B2 (en) 2015-11-17 2017-08-29 Motorola Solutions, Inc. Method and apparatus for expanded temperature operation of a portable communication device
US20170147355A1 (en) * 2015-11-24 2017-05-25 Le Holdings (Beijing) Co., Ltd. Method and system for accelerating intelligent terminal boot speed
CN105342636A (en) * 2015-12-08 2016-02-24 苏州波影医疗技术有限公司 Temperature control system and method for detector system of multi-layer X-ray CT system
US10168752B2 (en) * 2016-03-08 2019-01-01 Qualcomm Incorporated Systems and methods for determining a sustained thermal power envelope comprising multiple heat sources
US9817697B2 (en) * 2016-03-25 2017-11-14 International Business Machines Corporation Thermal-and spatial-aware task scheduling
US10175731B2 (en) 2016-06-17 2019-01-08 Microsoft Technology Licensing, Llc Shared cooling for thermally connected components in electronic devices
EP3264268A1 (en) * 2016-06-29 2018-01-03 Intel Corporation Distributed processing qos algorithm for system performance optimization under thermal constraints
US11175708B2 (en) * 2016-07-12 2021-11-16 American Megatrends International, Llc Thermal simulation for management controller development projects
US9747139B1 (en) * 2016-10-19 2017-08-29 International Business Machines Corporation Performance-based multi-mode task dispatching in a multi-processor core system for high temperature avoidance
US9753773B1 (en) 2016-10-19 2017-09-05 International Business Machines Corporation Performance-based multi-mode task dispatching in a multi-processor core system for extreme temperature avoidance
US11551990B2 (en) 2017-08-11 2023-01-10 Advanced Micro Devices, Inc. Method and apparatus for providing thermal wear leveling
US11742038B2 (en) * 2017-08-11 2023-08-29 Advanced Micro Devices, Inc. Method and apparatus for providing wear leveling
US11644834B2 (en) * 2017-11-10 2023-05-09 Nvidia Corporation Systems and methods for safe and reliable autonomous vehicles
EP3547076A1 (en) * 2018-03-28 2019-10-02 Advanced Digital Broadcast S.A. System and method for adjusting performance of components of a multi-component system
CN110968415B (en) * 2018-09-29 2022-08-05 Oppo广东移动通信有限公司 Scheduling method and device of multi-core processor and terminal
JP7172625B2 (en) * 2019-01-16 2022-11-16 トヨタ自動車株式会社 Information processing equipment
CN110333933A (en) * 2019-07-01 2019-10-15 华南理工大学 A kind of HPL computation model emulation mode
CN110794949A (en) * 2019-09-27 2020-02-14 苏州浪潮智能科技有限公司 Power consumption reduction method and system for automatically allocating computing resources based on component temperature
US20210223805A1 (en) * 2020-12-23 2021-07-22 Intel Corporation Methods and apparatus to reduce thermal fluctuations in semiconductor processors
US11860067B2 (en) * 2021-01-19 2024-01-02 Nvidia Corporation Thermal test vehicle
CN113616227B (en) * 2021-09-18 2024-05-28 明峰医疗系统股份有限公司 Detector temperature control system and method
CN115237179B (en) * 2022-09-22 2023-01-20 之江实验室 Intelligent temperature control management circuit based on machine learning
CN117215394B (en) * 2023-11-07 2024-01-23 北京数渡信息科技有限公司 On-chip temperature and energy consumption control device for multi-core processor
CN117369603B (en) * 2023-12-05 2024-03-22 广东迅扬科技股份有限公司 Cabinet heat dissipation control system

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7822996B2 (en) 1995-12-07 2010-10-26 Texas Instruments Incorporated Method for implementing thermal management in a processor and/or apparatus and/or system employing the same
US5940785A (en) 1996-04-29 1999-08-17 International Business Machines Corporation Performance-temperature optimization by cooperatively varying the voltage and frequency of a circuit
US7086058B2 (en) 2002-06-06 2006-08-01 International Business Machines Corporation Method and apparatus to eliminate processor core hot spots
JP3830491B2 (en) * 2004-03-29 2006-10-04 株式会社ソニー・コンピュータエンタテインメント Processor, multiprocessor system, processor system, information processing apparatus, and temperature control method
US7360102B2 (en) 2004-03-29 2008-04-15 Sony Computer Entertainment Inc. Methods and apparatus for achieving thermal management using processor manipulation
JP3781758B2 (en) * 2004-06-04 2006-05-31 株式会社ソニー・コンピュータエンタテインメント Processor, processor system, temperature estimation device, information processing device, and temperature estimation method
JP3805344B2 (en) * 2004-06-22 2006-08-02 株式会社ソニー・コンピュータエンタテインメント Processor, information processing apparatus and processor control method
US7739527B2 (en) 2004-08-11 2010-06-15 Intel Corporation System and method to enable processor management policy in a multi-processor environment
US8806228B2 (en) 2006-07-13 2014-08-12 International Business Machines Corporation Systems and methods for asymmetrical performance multi-processors
US7584369B2 (en) 2006-07-26 2009-09-01 International Business Machines Corporation Method and apparatus for monitoring and controlling heat generation in a multi-core processor
US20080115010A1 (en) 2006-11-15 2008-05-15 Rothman Michael A System and method to establish fine-grained platform control
JP2008157739A (en) * 2006-12-22 2008-07-10 Toshiba Corp Information processor and its starting method
US20100073068A1 (en) 2008-09-22 2010-03-25 Hanwoo Cho Functional block level thermal control
US8171325B2 (en) 2008-12-03 2012-05-01 International Business Machines Corporation Computing component and environment mobility
WO2010112045A1 (en) 2009-04-02 2010-10-07 Siemens Aktiengesellschaft Method and device for energy-efficient load distribution
JP4585598B1 (en) * 2009-06-30 2010-11-24 株式会社東芝 Information processing device
US8839012B2 (en) 2009-09-08 2014-09-16 Advanced Micro Devices, Inc. Power management in multi-GPU systems
US20110138395A1 (en) 2009-12-08 2011-06-09 Empire Technology Development Llc Thermal management in multi-core processor
CN102004673A (en) * 2010-11-29 2011-04-06 中兴通讯股份有限公司 Multi-core processor load balancing processing method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
None

Also Published As

Publication number Publication date
US20120271481A1 (en) 2012-10-25
JP6059204B2 (en) 2017-01-11
CN103582857A (en) 2014-02-12
US8942857B2 (en) 2015-01-27
KR20140002072A (en) 2014-01-07
KR101529419B1 (en) 2015-06-16
CN103582857B (en) 2017-06-23
EP2699977A2 (en) 2014-02-26
WO2012145212A3 (en) 2013-03-28
JP2014516443A (en) 2014-07-10

Similar Documents

Publication Publication Date Title
US8942857B2 (en) Method and system for thermal load management in a portable computing device
US9442774B2 (en) Thermally driven workload scheduling in a heterogeneous multi-processor system on a chip
EP2962169B1 (en) System and method for thermal management in a portable computing device using thermal resistance values to predict optimum power levels
EP2758852B1 (en) System and method for managing thermal energy generation in a heterogeneous multi-core processor
EP2867742B1 (en) System and method for adaptive thermal management in a portable computing device
US8768666B2 (en) Method and system for controlling thermal load distribution in a portable computing device
EP2766788B1 (en) System and method for determining thermal management policy from leakage current measurement
US9703336B2 (en) System and method for thermal management in a multi-functional portable computing device
US8996902B2 (en) Modal workload scheduling in a heterogeneous multi-processor system on a chip
US20130090888A1 (en) System and method for proximity based thermal management of mobile device
EP2729859A2 (en) Method and system for preempting thermal load by proactive load steering
WO2013043349A1 (en) On-chip thermal management techniques using inter-processor time dependent power density data for identification of thermal aggressors

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 12716927

Country of ref document: EP

Kind code of ref document: A2

DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (PCT application filed from 20040101)
REEP Request for entry into the european phase

Ref document number: 2012716927

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2012716927

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2014506456

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 20137030978

Country of ref document: KR

Kind code of ref document: A