WO2012069881A1 - Method and apparatus for managing power in a multi-core processor - Google Patents

Method and apparatus for managing power in a multi-core processor Download PDF

Info

Publication number
WO2012069881A1
WO2012069881A1 PCT/IB2010/055416
Authority
WO
WIPO (PCT)
Prior art keywords
core
cores
processing cores
processing
enabled
Prior art date
Application number
PCT/IB2010/055416
Other languages
French (fr)
Inventor
Michael Priel
Anton Rozen
Leonid Smolyansky
Original Assignee
Freescale Semiconductor, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Freescale Semiconductor, Inc. filed Critical Freescale Semiconductor, Inc.
Priority to EP10860047.9A priority Critical patent/EP2643741A4/en
Priority to CN201080070336.5A priority patent/CN103229123B/en
Priority to PCT/IB2010/055416 priority patent/WO2012069881A1/en
Priority to US13/989,280 priority patent/US9335805B2/en
Publication of WO2012069881A1 publication Critical patent/WO2012069881A1/en

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00 Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/26 Power supply means, e.g. regulation thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00 Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/26 Power supply means, e.g. regulation thereof
    • G06F1/32 Means for saving power
    • G06F1/3203 Power management, i.e. event-based initiation of a power-saving mode
    • G06F1/3234 Power saving characterised by the action undertaken
    • G06F1/324 Power saving characterised by the action undertaken by lowering clock frequency
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00 Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/26 Power supply means, e.g. regulation thereof
    • G06F1/32 Means for saving power
    • G06F1/3203 Power management, i.e. event-based initiation of a power-saving mode
    • G06F1/3234 Power saving characterised by the action undertaken
    • G06F1/3296 Power saving characterised by the action undertaken by lowering the supply or operating voltage
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5094 Allocation of resources, e.g. of the central processing unit [CPU] where the allocation takes into account power or heat criteria
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • This invention relates to data processing systems in general, and in particular to an improved apparatus and method for managing power in a multi-core processor.
  • Data processing systems such as PCs, mobile tablets, smart phones, and the like, often comprise a multi-core design, typically in the form of a multi-core processor.
  • the multiple cores may also be formed as part of a System-on-Chip (SoC).
  • High-frequency multi-core designs face several technical issues that need to be overcome. These include high power utilisation (and dissipation) when all cores are operating at full speed, and issues arising from running both software optimised for multi-core environments and software not optimised for multi-core environments.
  • the present invention provides a method and apparatus for managing power in a multi-core data processing system having two or more processing cores as described in the accompanying claims.
  • Figure 1 schematically shows a block chart of a first example of a data processing system having a multi-core processor according to an embodiment of the present invention
  • Figure 2 schematically shows a block chart of a second example of a data processing system having a SoC multi-core processor according to an embodiment of the present invention
  • Figure 3 graphically shows a simplified overview of how power consumption of cores within a multi-core system varies according to the number of cores enabled
  • Figure 4 graphically shows how power consumption of cores within a multi-core system varies according to the number of cores enabled, in an exemplary high leakage current scenario
  • Figure 5 graphically shows how power consumption of cores within a multi-core system varies according to the number of cores enabled, in an exemplary low split-ability application scenario
  • Figure 6 shows a high level schematic flow diagram of the method according to an embodiment of the present invention.
  • Figure 1 schematically shows a first example of a data processing system 100a to which embodiments of the present invention may apply. It is a simplified schematic diagram of a typical desktop computer configuration having a multi-core central processing unit (CPU) 110 with four separate processing cores 114, and including a level 2 cache memory 113 and a core management entity 112.
  • The core management entity 112 directs the utilisation of the cores during data processing, including but not limited to adjusting the operating frequency/power of each core individually, individually enabling/disabling the cores, and the like, with regard to operating characteristics of both the hardware and software being used, as discussed in more detail below.
  • The multi-core CPU 110 is connected to a North/South bridge chipset 120 via interface 125.
  • The North/South bridge chipset 120 acts as a central hub, connecting the different electronic components of the overall data processing system 100a together, for example, the main external system memory 130, discrete graphics processing unit (GPU) 140, and external connections such as Universal Serial Bus (USB) 121, Audio Input/Output (I/O) 122, IEEE 1394b 123, and system interconnect (e.g. PCIe, and the like) 124, and in particular connecting them all to the CPU 110.
  • Main external system memory 130 may connect to the North/South bridge chipset 120 through external memory interface 135, or, alternatively, the CPU 110 may further include an integrated high speed external memory controller 111 providing the high speed external memory interface 135b to the main external system memory 130.
  • In the latter case, the main external system memory 130 may not use the standard external memory interface 135 to the North/South bridge chipset 120.
  • The integration of the external memory controller 111 into the CPU 110 itself is seen as one way to increase overall system data throughput, as well as reducing component count and manufacturing costs whilst increasing reliability and the like.
  • The discrete graphics processing unit (GPU) 140 may connect to the North/South bridge chipset 120 through dedicated graphics interface 145 (e.g. Advanced Graphics Port - AGP), and to the display 150 via display interconnect 155 (e.g. Digital Video Interface (DVI), High Definition Multimedia Interface (HDMI), D-sub (analog), and the like).
  • Alternatively, the discrete GPU 140 may connect to the North/South bridge chipset 120 through a non-dedicated interface/interconnect, such as Peripheral Component Interconnect (PCI - an established parallel interface standard) or PCI Express (PCIe - a newer, faster serialised interface standard), or other similarly capable interfaces (standard or non-standard).
  • Peripheral devices may be connected through the other dedicated external connection interfaces such as USB 121, Audio Input/Output interface 122, IEEE 1394a/b interface 123, Ethernet interface (not shown), main interconnect 124 (e.g. PCIe), or the like.
  • Different examples of the present invention may have different sets of external connection interfaces present, i.e. the invention is not limited to any particular selection of external connection interfaces (or indeed internal connection interfaces).
  • Figure 2 schematically shows a second, more integrated, example of an embodiment of a data processing system 100b to which the present invention may apply.
  • The data processing system is simplified compared to Figure 1, and represents a commoditised mobile data processing system, such as a tablet computing device.
  • The mobile data processing system 100b of Figure 2 comprises a SoC multi-core CPU 110b having four processing cores 114, and including an integrated cache memory 113, integrated core management entity 112 (operating in a very similar way as for Fig. 1), integrated GPU 141, integrated external memory interface 111, and other integrated external interfaces 115.
  • All of the other parts of the system directly connect to the integrated CPU 110b, such as main external system memory 130 via interface connection 135, a touch display 151 via bi-directional interface 155, and a wireless module via USB interface 121.
  • The bi-directional touch interface 155 is operable to allow display information to be sent to the touch display 151, whilst also allowing touch control input from the touch display 151 to be sent back to the CPU 110b via integrated GPU 141.
  • The wireless module also comprises an antenna 165 for receiving and transmitting wireless communication signals.
  • The mobile data processing system 100b, and in particular the external interfaces 115, may also include any other standardised internal or external connection interfaces (such as the IEEE 1394b, Ethernet, or Audio Input/Output interfaces of Figure 1).
  • Mobile devices in particular may also include some non-standard external connection interfaces (such as a proprietary docking station interface). In other words, the present invention is not limited by which types of internal/external connection interfaces are provided by or to the mobile data processing system 100b.
  • A single device 100b for use worldwide may be developed, with only certain portions being varied according to the needs/requirements of the intended sales locality (i.e. local, federal, state or other restrictions or requirements).
  • For example, the wireless module may be interchanged according to local/national requirements.
  • An IEEE 802.11n and Universal Mobile Telecommunications System (UMTS) wireless module 160 may be used in Europe, whereas an IEEE 802.11n and Code Division Multiple Access (CDMA) wireless module may be used in the United States of America.
  • In each case, the respective wireless module 160 is connected through the same external connection interface, in this case the standardised USB connection 121.
  • The data processing system (100a or 100b) functions to implement a variety of data processing functions by executing a plurality of data processing instructions (i.e. the program code and content data) across the multiple processing cores 114.
  • The cache memory 113 is a temporary data store for frequently-used information that is needed by the multiple processing cores of the central processing unit 110/110b.
  • The plurality of data processing instructions are often split into individually executable tasks, or threads, which interact with one another to carry out the overall function of the application(s)/program(s) being executed on the multi-core data processing system.
  • A single application may be formed from a plurality of tasks/threads. Equally, multiple single-task applications may be executed concurrently.
  • The described method and apparatus may apply in either situation, or a mix of the two; thus reference to any of 'application, program, thread or task' may, in the main, be understood as a reference to any of these forms of computer code, where applicable.
  • Fig. 3 shows a simplified overview 300 of how power consumption of cores within a multi-core system varies according to the number of cores enabled.
  • The graph shows three use scenarios: single operating core 301; two operating cores 302; and three operating cores 303.
  • The frequency reduction can be done equally across all processing cores by a factor inversely proportional to the number of operating cores, i.e. they each have the same relative reduction factor applicable. This is because each processing core carries out less computation per unit of time, but there are more computational units (i.e. processing cores) working together to provide the total processing amount.
  • For example, if there are two processing cores, the processing frequency can halve, whereas if there are three processing cores, the processing frequency may be reduced by a factor of 3. This is shown in Fig. 3.
  • Power consumption of an integrated circuit is proportional to its operating frequency and the square of its operating voltage.
  • As the processing cores reduce their operating frequency, the amount of power they each actually require to carry out the necessary calculations drops proportionally.
  • The reduction of power required by a processing core is related to both the number of processing cores and their operating frequency, and may be incrementally better as the number of processing cores used increases.
  • This operating voltage power reduction, in combination with the reduction in power due to the number of processing cores being used, results in a lower combined total power bar level in both the two-core example (power bar level 305) and the three-core example (power bar level 309).
  • N-Core Power = n * (C * (V/b)² * F/n)
  • where n is the number of enabled processing cores, C is the switched capacitance, V is the operating voltage, F is the single-core operating frequency, and b is the relative voltage reduction applicable due to the corresponding frequency reduction (e.g. for the 2-core scenario).
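As a rough illustration of this relation, the following sketch computes total dynamic power as cores are added. The values for C, V, F and the voltage-reduction factor b are purely hypothetical placeholders, not figures from the patent; the point is only the shape of the formula.

```python
# Illustrative sketch of N-Core Power = n * (C * (V/b)^2 * F/n).
# All parameter values are hypothetical placeholders.

def n_core_dynamic_power(n, C=1e-9, V=1.2, F=2.0e9, b=1.0):
    """Total dynamic power (watts) when a single-core workload at
    frequency F is split evenly across n cores, with the supply
    voltage reduced by factor b alongside the frequency reduction."""
    per_core_freq = F / n                      # each core runs at F/n
    per_core_power = C * (V / b) ** 2 * per_core_freq
    return n * per_core_power                  # n cores in total

# With no voltage reduction (b=1) the total dynamic power is unchanged;
# the saving comes from the squared voltage term.
p1 = n_core_dynamic_power(1)            # single core, full V and F
p2 = n_core_dynamic_power(2, b=1.25)    # two cores, 20% lower voltage
assert p2 < p1
```

Because the voltage enters squared, even a modest voltage reduction made possible by the lower per-core frequency yields a disproportionate power saving.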
  • Another reason may be that there is a theoretical limit to the amount of power that may be saved in a particular processing core through voltage/frequency reduction, due to leakage currents in the multi-core integrated circuit, and in particular in that processing core/type of processing core. Leakage currents may result in wasted power in a processing core, and where the same particular core structure is used multiple times in a homogenous multi-core environment, the leakage currents will be largely similar for each processing core. Thus the leakage current losses increase proportionally to the number of processing cores in operation.
  • Leakage of the processing cores may be considerable when compared to the dynamic power usage requirements of the processing cores (i.e. the dynamically changing power load which is dependent on the core processing load) in certain constructions, and especially at higher operating temperatures.
  • Here Tj denotes the transistor junction temperature.
  • Fig. 4 shows the effects 400 of high leakage currents in a multi-core integrated circuit, and in particular how they affect the power usage of each enabled processing core in a homogenous multi-core processor.
  • It can be seen that each enabled processing core causes leakage current, and hence leakage power consumption, and that the leakage power consumption scales less favourably than the dynamic power consumption.
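This scaling behaviour can be illustrated with a small model that adds a fixed per-core leakage term to the dynamic power. All numbers here are hypothetical placeholders chosen only to show the trend, not values from the patent.

```python
# Hypothetical sketch: total power = voltage-scaled dynamic power plus a
# static leakage term that grows linearly with the number of enabled cores.

def total_power(n, dynamic_single=2.88, leak_per_core=0.4, b=1.0):
    """dynamic_single: dynamic power of one core at full V and F (watts);
    leak_per_core: static leakage per enabled core (watts); b: relative
    voltage reduction. All values are illustrative placeholders."""
    dynamic = dynamic_single / (b ** 2)   # voltage scaling saves on the V^2 term
    return dynamic + n * leak_per_core    # leakage scales with enabled cores

low_leak  = [total_power(n, leak_per_core=0.05, b=1 + 0.1 * (n - 1)) for n in (1, 2, 3)]
high_leak = [total_power(n, leak_per_core=0.8,  b=1 + 0.1 * (n - 1)) for n in (1, 2, 3)]
# With low leakage, adding cores keeps reducing total power; with high
# leakage, the per-core leakage term soon outweighs the dynamic saving.
```

This mirrors the Fig. 4 scenario: the dynamic saving from voltage/frequency reduction is bounded, while leakage losses grow in proportion to the number of operating cores.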
  • Fig. 5 shows how power consumption 500 of processing cores within a multi-core system varies according to the number of processing cores enabled in an exemplary low split-ability application scenario, for a single processing core 501, two processing cores 502, and multiple (n) processing cores 503.
  • "Low split-ability" is where an application does not split well across multiple processing cores, for example due to the application having a low level of inherent parallelism.
  • In such a scenario, a high load may be caused on the main processing core (i.e. typically the first numbered processing core, which is the first to be used by any application; however, any other selection of "main core" may also be used).
  • A relevant characteristic is the 'split-ability' of the particular application being run/executed on the multi-core processor, which is the ability of the respective program(s) to be split over multiple processing cores for execution (i.e. it is the program(s)' level of achievable/potential parallelism).
  • One approach to determining the "split-ability" is to run the program on a single-core enabled processor and then analyse what sort of tasks and what number of tasks are executed on the single processing core. If there are only a few (or even a single) task(s) running on the single core, then the inherent split-ability of that program is low. However, if there are many separate tasks running on the single core, there is a high degree of "split-ability" in the program.
  • This analysis stage may be either carried out on an actual single core processor, or a multi-core processor with all but one processing core disabled. In such a case, if the processing cores are heterogeneous, the method is further improved by carrying out multiple analysis stages, with each type of processing core being used at least once for one of the analysis stages, so that a complete picture of the overall capabilities of each type of processing core is known.
  • The "split-ability" analysis stage may be carried out offline or online, i.e. during compilation, or "on-the-fly" during actual execution of the respective application. However, the earlier the determination of application split-ability, the better the response may be. A similar 'split-ability' analysis may be carried out on a group of applications to be/being executed concurrently. In this case, whether the multi-core processor is heterogeneous or homogenous may be more influential, as particular cores may be better or worse at carrying out certain tasks.
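A minimal sketch of the single-core probing approach described above might look like the following. The sampled task counts, the threshold, and the classification labels are assumptions made for illustration; the patent does not prescribe a particular metric.

```python
# Rough sketch of the single-core "split-ability" probe: run the program
# with one core enabled and observe how many separate tasks it presents.

def estimate_splitability(task_samples, threshold=2.0):
    """Classify split-ability from the mean number of runnable tasks
    observed while the program executes on a single core. task_samples
    would come from profiling; threshold is an assumed tuning value."""
    if not task_samples:
        return "unknown"
    mean_tasks = sum(task_samples) / len(task_samples)
    return "high" if mean_tasks >= threshold else "low"

print(estimate_splitability([1, 1, 2, 1]))   # mostly one task  -> "low"
print(estimate_splitability([6, 5, 7, 8]))   # many tasks ready -> "high"
```

A program that presents only one or two runnable tasks on the probe core has low inherent parallelism, while one that presents many concurrent tasks would split well across cores.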
  • Parameters taken into account by the analysis may include: the thermal junction limit Tj, either for the particular processing core (e.g. in a heterogeneous multi-core processor environment) or for all cores (e.g. in a homogenous multi-core processor environment); and an estimation of the leakage power characteristics of individual cores (e.g. in a heterogeneous multi-core implementation) or of multiple cores (e.g. in a homogenous implementation).
  • Leakage power may be related to a particular part (i.e. the whole multi-core processor) or individual processing core, to production parameters/characteristics, and to temperature (in particular, the (expected) operating temperature).
  • Figure 6 shows a simplified high level schematic flow diagram 600 of the method of managing power in a multi-core processing environment according to an embodiment of the present invention.
  • This figure is effectively a state transition diagram with frequency on the Y axis, and the number of cores in use on the X axis.
  • A power management entity may then move the multi-core processor between the different states according to tested parameters affecting the multi-core processor, as described in more detail below.
  • The four states are: the 'single core enabled, low performance' state 620; the 'single core enabled, high performance' state 630; the 'N cores enabled, high performance' state 640; and the 'N cores enabled, low performance' state 650.
  • The number of cores enabled in each state is self-evident; 'low performance' means running the core(s) at low frequency/power use levels, whilst 'high performance' means running the core(s) at high frequency/power use levels.
  • In the 'single core enabled, low performance' state 620, the core management entity 112 may determine that there is insufficient overall performance being provided. Hence, more performance is required, and may be provided (as discussed in more detail above) by either increasing individual processing core performance, or through simply enabling more processing cores.
  • Where each processing core has a relatively high leakage current characteristic, it is not considered realistic to provide increased performance through simply enabling/operating more (high leakage) processing cores, since the increased high leakage current (and hence high power waste) would be prohibitive for multi-core simultaneous running.
  • Hence the state is transitioned 601 to one in which there is a single processing core enabled, but operating at (very) high performance (state 630).
  • The method may then transition from the 'single core enabled, high performance' state 630 back to the 'single core enabled, low performance' state 620 through the "low performance required" transition 602.
  • When in state 620 ('single core, low performance'), if a similar 'higher performance required' transition occurs, but where the respective processing cores do not have high leakage current problems (i.e. transition 603), it may be beneficial to provide the increased performance by enabling more processing cores rather than making a single processing core operate individually at higher performance. In this case there may be a transition to the 'N cores enabled, low performance' state 650.
  • The opposite transition 604 may occur when only low performance is required, so that it is not necessary to maintain a larger number of running processing cores.
  • The core management entity 112 can transition the cores 605 in a multi-core processor to state 630 instead, i.e. to a state in which only a single processing core is enabled, but at high performance. This may also happen if, for example, the junction temperature (Tj) falls back below a target level, indicating a single high performance processing core may suffice.
  • The opposite transition 606 may occur, i.e. where the maximum junction temperature (Tj) is reached but high(er) performance is still required. This may occur periodically where low leakage currents are present.
  • The 'n-cores enabled, low performance' state 650 may also be kept if a uniform load is detected.
  • An alternative approach to providing more performance is to transition to state 640 ('N cores enabled, high performance'), especially when the individual leakage currents for the processing cores are low and the application(s) being/to be executed is/are more uniform and easily parallelised.
  • This way of providing more performance is particularly useful where the maximum specified junction temperature (Tj) must be respected: if it is required to reduce the junction temperature (Tj) whilst maintaining or increasing performance, then it is not possible to use any increases in individual processing core performance levels, and generally the increased performance can only be achieved by increasing the number of processing cores operating.
  • In this way, hotspots on the integrated circuit forming the multi-core processor may be avoided, or spread out more evenly across the physical surface of the integrated circuit. It is also possible to use integrated temperature sensors within the multi-core processor itself to detect junction temperatures (Tj), and to locate hotspots so that they may be avoided (e.g. by spreading out the load to other processing cores within the multi-core processor). Also, processing may be moved to different equivalent processing cores (i.e. moving processing from an enabled processing core to a currently disabled but identical processing core, and disabling the originating processing core) if an originating processing core is getting too hot, for example, if it is immediately adjacent another operating processing core.
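The hot-core migration idea might be sketched as follows. The temperature limit, the data layout, and the hottest/coolest selection policy are illustrative assumptions, not the patent's implementation; in practice the temperatures would come from the on-die sensors mentioned above.

```python
# Hypothetical sketch: if an enabled core exceeds a junction-temperature
# limit, move its work to the coolest disabled (identical) core and
# disable the hot one.

TJ_MAX = 100.0  # assumed junction-temperature limit, degrees C

def pick_migration_target(cores):
    """cores: dict of core_id -> (enabled, temperature_C).
    Returns a (hot_core, target_core) pair, or None if no migration
    is needed or no disabled core is available."""
    hot = [c for c, (en, t) in cores.items() if en and t > TJ_MAX]
    idle = [c for c, (en, t) in cores.items() if not en]
    if not hot or not idle:
        return None
    hottest = max(hot, key=lambda c: cores[c][1])   # worst offender first
    coolest = min(idle, key=lambda c: cores[c][1])  # best landing spot
    return (hottest, coolest)

cores = {0: (True, 104.0), 1: (True, 88.0), 2: (False, 55.0), 3: (False, 61.0)}
print(pick_migration_target(cores))  # core 0 is too hot -> (0, 2)
```

A real core management entity would also consider physical adjacency, so that work is not migrated onto a core sitting next to another hot one.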
  • In the opposite direction, i.e. going from state 640 to state 650, similar uniform load, low leakage and junction temperature considerations apply, and are shown as transition 608.
  • From state 640, a transition towards state 630 might be required, for example when a non-uniform load is detected, or there is high leakage per enabled processing core (transition 610).
  • A transition may occur from state 630 to state 640 when, for example, an overload occurs, where there is a requirement for processing power beyond what a single core is physically capable of providing (transition 609).
  • Transitions between the two extreme states 620 and 640 (i.e. 'single core enabled, low performance' and 'N cores enabled, high performance') typically occur through one of the other two, more intermediate states 630 and 650, and not directly. However, a direct transition may equally be used when the processing power needs are extremely contrasting.
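The four states and the transitions 601-610 described above can be sketched as a simple decision function. The ordering of the checks and the boolean condition names are assumptions made for illustration, since the patent leaves the exact decision logic of the core management entity open.

```python
# Illustrative sketch of the Fig. 6 state machine; transition numbers in
# the comments refer to the transitions described in the text.

SINGLE_LOW, SINGLE_HIGH = "620", "630"   # single core, low/high performance
N_HIGH, N_LOW = "640", "650"             # N cores, high/low performance

def next_state(state, need_high_perf, high_leakage, uniform_load, overload=False):
    if state == SINGLE_LOW and need_high_perf:
        # 601 vs 603: high leakage favours one fast core over many cores
        return SINGLE_HIGH if high_leakage else N_LOW
    if state == SINGLE_HIGH:
        if overload:
            return N_HIGH                  # 609: beyond one core's capability
        if not need_high_perf:
            return SINGLE_LOW              # 602: low performance required
        if uniform_load and not high_leakage:
            return N_LOW                   # 606: share the load across cores
    if state == N_LOW:
        if need_high_perf:
            if uniform_load and not high_leakage:
                return N_HIGH              # 607: parallel high performance
            return SINGLE_HIGH             # 605: one fast core may suffice
        if not uniform_load:
            return SINGLE_LOW              # 604 (650 is kept under uniform load)
    if state == N_HIGH:
        if not uniform_load or high_leakage:
            return SINGLE_HIGH             # 610
        if not need_high_perf:
            return N_LOW                   # 608
    return state

print(next_state(SINGLE_LOW, need_high_perf=True, high_leakage=True, uniform_load=False))
```

Note that, as in the text, the two extreme states 620 and 640 are only reached from each other via the intermediate states in this sketch; a direct jump would be an additional rule for extremely contrasting demands.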
  • Overall, Fig. 6 shows and explains the main considerations for the power management entity when deciding on which processing cores to enable/disable, and what power and frequency levels to apply to the enabled cores.
  • The exact choice of power options to apply to the processing cores varies according to the inherent characteristics of the hardware and software being used together, as well as the environment they are operating in. Pre-testing those characteristics through simulation or dry runs may be required for optimum multi-core power management.
  • The above examples show a method of managing power in a multi-core processing environment, and in particular within data processing systems having a multi-core processor therein.
  • The above described method and apparatus may be accomplished, for example, by adjusting the structure/operation of the data processing system, and in particular the core power management entity 112 within the multi-core processor.
  • the invention may also be implemented in a computer program for running on a computer system, at least including code portions for performing steps of a method according to the invention when run on a programmable apparatus, such as a computer system or enabling a programmable apparatus to perform functions of a device or system according to the invention.
  • A computer program is a list of instructions such as a particular application program and/or an operating system.
  • The computer program may for instance include one or more of: a subroutine, a function, a procedure, an object method, an object implementation, an executable application, an applet, a servlet, source code, object code, a shared library/dynamic load library and/or other sequence of instructions designed for execution on a computer system.
  • The computer program may be stored internally on a computer readable storage medium or transmitted to the computer system via a computer readable transmission medium. All or some of the computer program may be provided on computer readable media permanently, removably or remotely coupled to an information processing system.
  • The computer readable media may include, for example and without limitation, any number of the following: magnetic storage media including disk and tape storage media; optical storage media such as compact disk media (e.g., CD ROM, CD R, etc.) and digital video disk storage media; nonvolatile memory storage media including semiconductor-based memory units such as FLASH memory, EEPROM, EPROM, ROM; ferromagnetic digital memories; MRAM; volatile storage media including registers, buffers or caches, main memory, RAM, etc.; and data transmission media including computer networks, point-to-point telecommunication equipment, and carrier wave transmission media, just to name a few.
  • A computer process typically includes an executing (running) program or portion of a program, current program values and state information, and the resources used by the operating system to manage the execution of the process.
  • An operating system is the software that manages the sharing of the resources of a computer and provides programmers with an interface used to access those resources.
  • An operating system processes system data and user input, and responds by allocating and managing tasks and internal system resources as a service to users and programs of the system.
  • The computer system may for instance include at least one processing unit, associated memory and a number of input/output (I/O) devices.
  • The computer system processes information according to the computer program and produces resultant output information via I/O devices.
  • Computer readable media may be permanently, removably or remotely coupled to an information processing system such as data processing system 100a/b.
  • any two components herein combined to achieve a particular functionality can be seen as “associated with” each other such that the desired functionality is achieved, irrespective of architectures or intermedial components.
  • any two components so associated can also be viewed as being “operably connected,” or “operably coupled,” to each other to achieve the desired functionality.
  • In one embodiment, data processing systems 100a/b are circuitry located on a single integrated die or circuit, or within a same device. Alternatively, data processing systems 100a/b may include any number of separate integrated circuits or separate devices interconnected with each other.
  • Power management entity 112 may be located on a same integrated circuit as CPU 110, or on a separate integrated circuit, or located within another peripheral or slave discretely separate from other elements of data processing system 100a/b.
  • Data processing system 100a/b or portions thereof may be soft or code representations of physical circuitry, or of logical representations convertible into physical circuitry.
  • Data processing system 100a/b may be embodied in a hardware description language of any appropriate type.
  • The data processing system may be a computer system such as personal computer system 100a.
  • Other embodiments may include different types of computer systems, such as mobile data processing system 100b.
  • Data processing systems are information handling systems which can be designed to give independent computing power to one or more users. Data processing systems may be found in many forms including but not limited to mainframes, minicomputers, servers, workstations, personal computers, notepads, personal digital assistants, electronic games, automotive and other embedded systems, cell phones and various other wireless devices.
  • A typical computer system includes at least one processing unit, associated memory and a number of input/output (I/O) devices.
  • A data processing system processes information according to a program (i.e. application) and produces resultant output information via I/O devices.
  • A program is a list of instructions such as a particular application program and/or an operating system.
  • A computer program is typically stored internally on a computer readable storage medium or transmitted to the computer system via a computer readable transmission medium, such as wireless module 160.
  • a computer process typically includes an executing (running) program or portion of a program, current program values and state information, and the resources used by the operating system to manage the execution of the process.
  • a parent process may spawn other, child processes to help perform the overall functionality of the parent process. Because the parent process specifically spawns the child processes to perform a portion of the overall functionality of the parent process, the functions performed by child processes (and grandchild processes, etc.) may sometimes be described as being performed by the parent process.
  • Coupled is not intended to be limited to a direct coupling or a mechanical coupling.
  • any reference signs placed between parentheses shall not be construed as limiting the claim.
  • the word 'comprising' does not exclude the presence of other elements or steps than those listed in a claim.
  • the terms "a" or "an", as used herein, are defined as one or more than one.

Abstract

There is provided a method of managing power in a multi-core data processing system having two or more processing cores, comprising determining usage characteristics for the two or more processing cores within the multi-core processing unit, and dependent on the determined usage characteristics, adapting a frequency or voltage supplied to each of the two or more processing cores, and/or adapting enablement signals provided to each of the two or more processing cores. There is also provided an apparatus for carrying out the disclosed method.

Description

Title: METHOD AND APPARATUS FOR MANAGING POWER IN A MULTI-CORE PROCESSOR
Description Field of the invention
This invention relates to data processing systems in general, and in particular to an improved apparatus and method for managing power in a multi-core processor.
Background of the invention
Data processing systems, such as PCs, mobile tablets, smart phones, and the like, often comprise a multi-core design, typically in the form of a multi-core processor. The multiple cores may also be formed as part of a System-on-Chip (SoC).
High-frequency multi-core designs face several technical issues that need to be overcome. These include high power utilisation (and dissipation) when all cores operate at full speed, and issues arising from running both software optimised for multi-core environments and software not optimised for such environments.
Summary of the invention
The present invention provides a method and apparatus for managing power in a multi-core data processing system having two or more processing cores as described in the accompanying claims.
Specific embodiments of the invention are set forth in the dependent claims.
These and other aspects of the invention will be apparent from and elucidated with reference to the embodiments described hereinafter.
Brief description of the drawings
Further details, aspects and embodiments of the invention will be described, by way of example only, with reference to the drawings. In the drawings, like reference numbers are used to identify like or functionally similar elements. Elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale.
Figure 1 schematically shows a block chart of a first example of a data processing system having a multi-core processor according to an embodiment of the present invention;
Figure 2 schematically shows a block chart of a second example of a data processing system having a SoC multi-core processor according to an embodiment of the present invention;
Figure 3 graphically shows a simplified overview of how power consumption of cores within a multi-core system varies according to the number of cores enabled;
Figure 4 graphically shows how power consumption of cores within a multi-core system varies according to the number of cores enabled, in an exemplary high leakage current scenario;
Figure 5 graphically shows how power consumption of cores within a multi-core system varies according to the number of cores enabled, in an exemplary low split-ability application scenario;
Figure 6 shows a high level schematic flow diagram of the method according to an embodiment of the present invention.
Detailed description of the preferred embodiments
Because the illustrated embodiments of the present invention may, for the most part, be implemented using electronic components and circuits known to those skilled in the art, details will not be explained in any greater extent than that considered necessary for the understanding and appreciation of the underlying concepts of the present invention and in order not to obfuscate or distract from the teachings of the present invention.
Figure 1 schematically shows a first example of a data processing system 100a to which embodiments of the present invention may apply. It is a simplified schematic diagram of a typical desktop computer configuration having a multi-core central processing unit (CPU) 110 having four separate processing cores 114, and including a level 2 cache memory 113 and a core management entity 112. The core management entity 112 directs the utilisation of the cores during data processing, including but not limited to adjusting the operating frequency/power of each core individually, individually enabling/disabling the cores, and the like, with regard to operating characteristics of both the hardware and software being used, as discussed in more detail below.
The multi-core CPU 110 is connected to a North/South bridge chipset 120 via interface 125. The North/South bridge chipset 120 acts as a central hub to connect the different electronic components of the overall data processing system 100a together, for example, the main external system memory 130, discrete graphics processing unit (GPU) 140, and external connections such as Universal Serial Bus (USB) 121, Audio Input/Output (I/O) 122, IEEE 1394b 123, and system interconnect (e.g. PCIe, and the like) 124, and in particular to connect them all to the CPU 110.
In the example shown in Figure 1, main external system memory 130 (e.g. DDR random access memory) may connect to the North/South bridge chipset 120 through external memory interface 135, or, alternatively, the CPU 110 may further include an integrated high speed external memory controller 111 for providing the high speed external memory interface 135b to the main external system memory 130. In such a situation, the main external system memory 130 may not use the standard external memory interface 135 to the North/South bridge chipset 120. The integration of the external memory controller 111 into the CPU 110 itself is seen as one way to increase overall system data throughput, as well as reducing component count and manufacturing costs whilst increasing reliability and the like.
The discrete graphics processing unit (GPU) 140 may connect to the North/South bridge chipset 120 through dedicated graphics interface 145 (e.g. Advanced Graphics Port - AGP), and to the display 150, via display interconnect 155 (e.g. Digital Video Interface (DVI), High Definition Multimedia Interface (HDMI), D-sub (analog), and the like). In other examples, the discrete GPU 140 may connect to the North/South bridge chipset 120 through some non-dedicated interface/interconnect, such as Peripheral Connection Interface (PCI - an established parallel interface standard) or PCI Express (PCIe - a newer, faster serialised interface standard), or other similarly capable interfaces (standard or non-standard).
Other peripheral devices may be connected through the other dedicated external connection interfaces such as USB 121, Audio Input/Output interface 122, IEEE 1394a/b interface 123, Ethernet interface (not shown), main interconnect 124 (e.g. PCIe), or the like. Different examples of the present invention may have different sets of external connection interfaces present, i.e. the invention is not limited to any particular selection of external connection interfaces (or indeed internal connection interfaces).
The integration of interfaces previously found within the North/South bridge chipsets 120 (or other discrete portions of the overall system) into the central processing unit 110 itself has been an increasing trend (producing so called "system-on-chip" (SoC) designs). This is because integrating more traditionally discrete components into the main CPU 110 reduces manufacturing costs, fault rates, power usage, size of end device, and the like.
Figure 2 schematically shows a second, more integrated, example of an embodiment of a data processing system 100b to which the present invention may apply. In this example, the data processing system is simplified compared to Figure 1 , and it represents a commoditised mobile data processing system, such as a tablet computing device.
The mobile data processing system 100b of Figure 2 comprises a SoC multi-core CPU 110b having four processing cores 114, and including an integrated cache memory 113, integrated core management entity 112 (operating in a very similar way as for Figure 1), integrated GPU 141, integrated external memory interface 111, and other integrated external interfaces 115. In this case, all of the other parts of the system directly connect to the integrated CPU 110b, such as main external system memory 130 via interface connection 135, a touch display 151 via bi-directional interface 155, and a wireless module 160 via USB interface 121. The bi-directional touch interface 155 is operable to allow the display information to be sent to the touch display 151, whilst also allowing the touch control input from the touch display 151 to be sent back to the CPU 110b via integrated GPU 141.
The wireless module 160 also comprises an antenna 165, for receiving and transmitting wireless communication signals. The mobile data processing system 100b, and in particular the external interfaces 115, may also include any other standardised internal or external connection interfaces (such as the IEEE 1394b, Ethernet, or Audio Input/Output interfaces of Figure 1). Mobile devices in particular may also include some non-standard external connection interfaces (such as a proprietary docking station interface). This is all to say that the present invention is not limited by which types of internal/external connection interfaces are provided by or to the mobile data processing system 100b.
Typically, in such consumer/commoditised data processing systems, a single device 100b for use worldwide may be developed, with only certain portions being varied according to the needs/requirements of the intended sales locality (i.e. local, federal, state or other restrictions or requirements). For example, in the mobile data processing system 100b of Figure 2, the wireless module may be interchanged according to local/national requirements. For example, an IEEE 802.11n and Universal Mobile Telecommunications System (UMTS) wireless module 160 may be used in Europe, whereas an IEEE 802.11n and Code Division Multiple Access (CDMA) wireless module may be used in the United States of America. In either situation, the respective wireless module 160 is connected through the same external connection interface, in this case the standardised USB connection 121.
Regardless of the form of the data processing system (100a or 100b), the way in which the multi-core processor operates is generally similar. In operation, the data processing system (100a/b) functions to implement a variety of data processing functions by executing a plurality of data processing instructions (i.e. the program code and content data) across the multiple processing cores 114. The cache memory 113 is a temporary data store for frequently-used information that is needed by the multiple processing cores of the central processing unit 110/110b. The plurality of data processing instructions are often split into individually executable tasks, or threads, which interact with one another to carry out the overall function of the application(s)/program(s) being executed on the multi-core data processing system. These tasks may be executed on separate cores, depending on the workload of the overall data processing system. A single application may be formed from a plurality of tasks/threads. Equally, multiple single-task applications may be executed concurrently. The described method and apparatus may apply in either situation or a mix of the two; thus a reference to any of 'application, program, thread or task' may, in the main, be understood as a reference to any of these forms of computer code, where applicable.
General use scenarios (i.e. use-cases) for multi-core data processing systems often do not require the full available performance of the multi-core CPU. Therefore, the executing application(s)/program(s)/thread(s)/task(s) can be separated and load-balanced across multiple cores. Typically, there may be two or more options available for inter-core load balancing, including:
Running the application on a lower number of cores but each operating at a higher load, e.g. a single core operating at high frequency and high voltage;
Running the application on a higher number of cores but each at a lower load, e.g. several cores operating at low frequency and low voltage.
Fig. 3 shows a simplified overview 300 of how power consumption of cores within a multi-core system varies according to the number of cores enabled. The graph shows three use scenarios: single operating core 301; two operating cores 302; and three operating cores 303.
In a multi-core environment, if more processing cores are enabled, then to get a predetermined amount of computation done allows each processing core to reduce its operating frequency. In a homogenous multi-core system, the frequency reduction can be done equally across all processing cores by a factor inversely proportional to the number of operating cores, i.e. they each have the same relative reduction factor applicable. This is because each processing core carries out less computation per unit of time, but there are more computational units (i.e. processing cores) working together to provide the total processing amount. In a basic example, if there are twice as many processing cores available, the processing frequency can halve, whereas if there are three processing cores, the processing frequency may be reduced by a factor of 3. This is shown in Fig. 3 as the difference between the heights of the power bar level 304 for a single processor core scenario 301 , when compared to the two theoretical power bar levels 306 of the two core version 302, and the three theoretical power bar levels 312 of the three core version 303. In heterogeneous multi-core environments, a similar approach applies, but with variation in the relative changes in frequency allowed according to the relevant types of processing resources (i.e. core) provided.
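By way of a hedged illustration (a sketch under the assumption of a perfectly parallelisable workload on a homogenous system; the function name is invented here, not part of the described system), the per-core frequency needed to hold aggregate throughput constant falls in inverse proportion to the core count:

```python
def per_core_frequency(single_core_hz: float, num_cores: int) -> float:
    # Each core carries out less computation per unit of time, but more
    # cores work together to provide the same total processing amount.
    if num_cores < 1:
        raise ValueError("at least one processing core is required")
    return single_core_hz / num_cores

# One core at 2 GHz, two cores at 1 GHz, or four cores at 0.5 GHz
# all provide the same nominal aggregate throughput.
for n in (1, 2, 4):
    print(n, per_core_frequency(2e9, n))
```

A heterogeneous system would instead apply different reduction factors per core type, as the text notes.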
Furthermore, at least partly due to physical properties of integrated circuits, power consumption of an integrated circuit is proportional to its operating frequency and the square of its operating voltage. Thus, as processing cores reduce their operating frequency, the amount of power they each actually require to carry out the necessary calculations drops proportionally. Thus, the reduction of power required by a processing core (i.e. power saving) is related to both the number of processing cores and their operating frequency, and may be incrementally better as the number of processing cores used increases. This operating voltage power reduction, in combination with the reduction in power due to the number of processing cores being used, results in a lower combined total power bar level in both the two-core example (power bar level 305) and the three-core example (power bar level 309).
From this understanding, it can be seen that, since power reduction is most effective at low voltages (because power consumption is proportional to the square of the voltage, see equations below), in lower power envelope scenarios it may be justified to use more processing cores operating at lower frequency and voltage than would otherwise be the case:
Single Core Power (301): P1 = C * V^2 * F

Two Core Power (302): P2 = 2 * (C * (V/a)^2 * F/2)
= C * V^2 * F * (1/a)^2
= P1 * (1/a)^2

N-Core Power: Pn = n * (C * (V/b)^2 * F/n)
= C * V^2 * F * (1/b)^2
= P1 * (1/b)^2

Where C = parasitic capacitance of all components of the multi-core processor;
V = operating voltage;
F = operating frequency;
a = relative voltage reduction applicable due to the corresponding frequency reduction in the two-core scenario;
b = relative voltage reduction applicable due to the corresponding frequency reduction in the n-core scenario.

The above equations show that as more processing cores are operating, the frequency can be reduced, allowing the operating voltage to be reduced in turn, and resulting in a doubly lower overall power consumption. Thus, it can be appreciated that 'artificially' spreading an application(s) across multiple processing cores may be preferable to running the same application(s) on a single processing core.
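The derivation above can be checked numerically. The sketch below (with purely illustrative values of C, V, F and b, none of which come from the document) confirms that running n cores at voltage V/b and frequency F/n gives Pn = P1 * (1/b)^2:

```python
def dynamic_power(c: float, v: float, f: float) -> float:
    # Dynamic power of one core is proportional to the operating
    # frequency and the square of the operating voltage: P = C * V^2 * F.
    return c * v ** 2 * f

def n_core_power(n: int, c: float, v: float, f: float, b: float) -> float:
    # n cores, each running at voltage V/b and frequency F/n.
    return n * dynamic_power(c, v / b, f / n)

C, V, F = 1e-9, 1.2, 2e9               # illustrative values only
p1 = dynamic_power(C, V, F)
p4 = n_core_power(4, C, V, F, b=1.25)  # assumes a 20% voltage reduction
print(p4 / p1)                         # (1/1.25)**2 = 0.64
```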
However, in real world scenarios, taking such a straightforward approach may result in the opposite outcome - i.e. more power is actually used/wasted. One reason for this to happen is that the particular application being executed/used may not split well over multiple processing cores. This is to say, it is an application that is not easily parallelised.
Another reason may be that there is a theoretical limit to the amount of power that may be saved in a particular processing core through voltage/frequency reduction, due to leakage currents in the multi-core integrated circuit, and in particular in that processing core/type of processing core. Leakage currents may result in wasted power in a processing core, and where the same particular core structure is used multiple times in a homogenous multi-core environment, the leakage currents will be largely similar for each processing core. Thus the leakage current losses increase proportionally to the number of processing cores in operation.
Leakage of the processing cores may be considerable when compared to the dynamic power usage requirements of the processing cores (i.e. the dynamically changing power load which is dependent on the core processing load) in certain constructions, and especially at higher operating temperatures.
Furthermore, there is an over-arching constraint to ensure that the multi-core processor never exceeds a predetermined transistor junction temperature (Tj), which is the maximum allowed temperature of the actual transistor silicon junctions, in order to avoid transistor degradation through ion/electron migration.
Fig. 4 shows the effects 400 of high leakage currents in a multi-core integrated circuit, and in particular how they affect the power usage of each enabled processing core in a homogenous multi-core processor.
The total power consumption for a single processing core scenario 401 is the sum of the dynamic power component P1 and the leakage power component L1 (i.e. Σ1 = P1 + L1).
The total power consumption for a two-core scenario 402 is two times the sum of the dynamic power component P2 and the leakage power component L2 (i.e. Σ2 = 2*(P2 + L2)).
The total power consumption for an n-core multi-core scenario 403 is n times the sum of the dynamic power component Pn and the leakage power component Ln (i.e. Σn = n*(Pn + Ln)).
It can be seen that each enabled processing core causes leakage current, and hence leakage power consumption, and that, unlike the dynamic power consumption, the leakage power consumption does not reduce as the workload is spread across more cores.
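This effect can be sketched with assumed, illustrative numbers (no figures here come from the document): because every enabled core contributes its full leakage power, spreading a workload over more cores can cost more total power than the voltage scaling saves.

```python
def total_power(n_cores: int, p1_dynamic: float,
                voltage_scale: float, leakage_per_core: float) -> float:
    # Total dynamic power falls with the square of the voltage reduction,
    # but total leakage power grows linearly with the enabled core count.
    dynamic_total = p1_dynamic * voltage_scale ** 2
    return dynamic_total + n_cores * leakage_per_core

# Illustrative: 4 W single-core dynamic power, 1.5 W leakage per core,
# and a 20% voltage reduction available when two cores share the load.
single = total_power(1, 4.0, 1.0, 1.5)   # 4.0  + 1.5 = 5.5 W
dual   = total_power(2, 4.0, 0.8, 1.5)   # 2.56 + 3.0 = 5.56 W
print(single, dual)   # with high leakage, two cores use MORE total power
```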
Fig. 5 shows how power consumption 500 of processing cores within a multi-core system varies according to the number of processing cores enabled in an exemplary low split-ability application scenario. In particular, there is shown the situation for a single processing core (501), two processing cores (502) and multiple (n) processing cores (503). "Low split-ability" is where an application does not split well across multiple processing cores, for example due to the application having a low level of inherent parallelism. In this sort of situation a high load may be placed on the main processing core (typically the first-numbered processing core, which is the first to be used by any application, although any other selection of "main core" may also be used). This may prevent a common frequency and voltage level from being assigned to the multi-core processor, since there is a disparity in the operating levels required by each processing core - see examples 502 and 503 in particular. From the previous figure, we know that each enabled processing core causes leakage current, and hence increases total leakage power consumption.
Therefore, it is proposed that the decisions regarding the frequency/voltage scaling applied across a multi-core processor (including any decisions on a particular processing core's enable/disable status) may take into consideration one or more of the following additional new parameters:
- The 'split-ability' of the particular application being run/executed on the multi-core processor, which is the ability of the respective program(s) to be split over multiple processing cores for execution (i.e. it is the program(s)'s level of achievable/potential parallelism). One approach to determining the "split-ability" is to run the program on a single-core enabled processor and then analyse what sort of tasks and what number of tasks are executed on the single processing core. If there are only a few (or even a single) task(s) running on the single core, then the inherent split-ability of that program is low. However, if there are many separate tasks running on the single core, there is a high degree of "split-ability" in the program. This analysis stage may be either carried out on an actual single core processor, or a multi-core processor with all but one processing core disabled. In such a case, if the processing cores are heterogeneous, the method is further improved by carrying out multiple analysis stages, with each type of processing core being used at least once for one of the analysis stages, so that a complete picture of the overall capabilities of each type of processing core is known. The "split-ability" analysis stage may be carried out offline or online, i.e. during compilation, or "on-the-fly" during actual execution of the respective application. However, the earlier the determination of application split-ability, the better the response may be. A similar 'split-ability' analysis may be carried out on a group of applications to be/being executed concurrently. In this case, whether the multi-core processor is heterogenous or homogenous may be more influential, as particular cores may be better or worse at carrying out certain tasks.
- An estimation of the dynamic power characteristics of individual (e.g. in a heterogeneous multi-core processor environment) or multiple cores in multi-core processor (e.g. in a homogenous multi-core processor environment). This is to say, an estimation of how much power a single processing core uses vs how much power a multiple processing core implementation will take in terms of dynamic power usage, as explained with reference to Figures 3 - 5.
- The thermal junction limit Tj for the particular processing core (e.g. in a heterogeneous multi-core processor environment) or for all cores (e.g. in a homogenous multi-core processor environment).
- An estimation of the leakage power characteristics of individual cores (e.g. in a heterogeneous multi-core processor) or of multiple cores in the multi-core processor (e.g. in a homogenous implementation), i.e. how much power a single core vs a multiple-core implementation will take in terms of leakage power usage, as explained with reference to Figures 3 - 5.
- Leakage power may be related to the production parameters/characteristics and temperature (in particular, the (expected) operating temperature) of a particular part (i.e. the whole multi-core processor) or of an individual processing core.
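Purely as an illustrative sketch of how the listed parameters might feed a core-count decision (every name, threshold and the cost model below are invented for this example; the document does not specify any of them):

```python
def choose_core_count(max_cores: int, splitability: float,
                      leakage_per_core: float,
                      dynamic_saving_per_extra_core: float) -> int:
    # All inputs are hypothetical, normalised metrics invented for this
    # sketch: splitability in [0, 1], power figures in watts.
    if splitability < 0.5:
        return 1  # low split-ability: keep one core at high performance
    best_n, best_cost = 1, float("inf")
    for n in range(1, max_cores + 1):
        # Crude cost model: dynamic saving grows with each extra core,
        # while leakage cost grows linearly with every enabled core.
        cost = leakage_per_core * n - dynamic_saving_per_extra_core * (n - 1)
        if cost < best_cost:
            best_n, best_cost = n, cost
    return best_n

print(choose_core_count(4, 0.9, 0.2, 1.0))  # 4: parallel app, low leakage
print(choose_core_count(4, 0.9, 2.0, 1.0))  # 1: leakage dominates
print(choose_core_count(4, 0.2, 0.2, 1.0))  # 1: low split-ability
```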
Figure 6 shows a simplified high level schematic flow diagram 600 of the method of managing power in a multi-core processing environment according to an embodiment of the present invention. In particular, this figure is a state transition diagram with frequency on the Y axis, and the number of cores in use on the X axis. A power management entity may then move the multi-core processor between the different states according to tested parameters affecting the multi-core processor, as described in more detail below.
In this simplified example, there are four main states within which the multi-core processor may reside: 'single core enabled, low performance' state 620; 'single core enabled, high performance' state 630; 'N cores enabled, high performance' state 640; and 'N cores enabled, low performance' state 650. The number of cores enabled in each state is self-evident. Meanwhile, 'low performance' means running the core(s) at low frequency/power use levels, and 'high performance' means running the core(s) at high frequency/power use levels.
Taking the arbitrary starting point of being in state 620 (there being a single processing core enabled, operating at low performance), the core management entity 112 may determine that there is insufficient overall performance being provided. Hence, more performance is required, and may be provided (as discussed in more detail above) by either increasing individual processing core performance, or through simply enabling more processing cores.
However, if each processing core has a relatively high leakage current characteristic, it is not considered realistic to provide increased performance through simply enabling/operating more (high leakage) processing cores, since the increased high leakage current (and hence high power waste) would be prohibitive for multi-core simultaneous running.
Therefore, the state is transitioned 601 to one in which there is a single processing core enabled, but operating at (very) high performance (state 630).
In an opposite situation, i.e. where there is already too much performance available (and there is a high leakage current), the method may then transition from the 'single core enabled, high performance' state 630 to the 'single core enabled, low performance' state 620 through the "low performance required" transition 602.
Alternatively, when in state 620 ('single core, low performance'), if a similar 'higher performance required' transition occurs, but where the respective processing cores do not have high leakage current problems (i.e. transition 603), it may be beneficial to provide the increased performance by enabling more processing cores rather than making a single processing core operate individually at higher performance. Thus, there may be a transition to the 'N cores enabled, low performance' state 650. The opposite transition 604 may occur when only low performance is required, so it is not necessary to maintain a larger number of running processing cores.
When in state 650 ('N cores enabled, low performance'), if there subsequently occurs a situation where either the application(s) being executed/to be executed is/are of low split-ability (i.e. not easily parallelised) or the leakage current for each enabled processing core is high (e.g. the multi-core processor is operating in higher than expected temperatures), then the core management entity 112 can transition the cores 605 in a multi-core processor to state 630 instead, i.e. to a state in which only a single processing core is enabled, but at high performance. This may also happen if, for example, the junction temperature (Tj) falls back below a target level, indicating a single high performance processing core may suffice. The opposite transition 606 may occur, i.e. a move from the 'single core enabled, high performance' state 630 to the 'N cores enabled, low performance' state 650, if, for example, a maximum junction temperature (Tj) is reached and high(er) performance is still required. This may occur periodically where low leakage currents are present. The 'N cores enabled, low performance' state 650 may also be maintained if a uniform load is detected.
When in state 650, an alternative approach to providing more performance (especially when there is some form of overload detected - transition 607) is to transition to state 640 ('N cores enabled, high performance'), especially when the individual leakage currents for the processing cores are low and the application(s) being/to be executed is/are more uniform and easily parallelised. Where the maximum specified junction temperature (Tj) has been reached by a multi-core processor, this way of providing more performance is particularly useful. Where there is a need to reduce junction temperature (Tj) whilst maintaining or increasing performance, it is not possible to use any increases in individual processing core performance levels, and generally the increased performance can only be achieved by increasing the number of processing cores operating.
There is an additional benefit to enabling more processing cores, in that hotspots on the integrated circuit forming the multi-core processor may be avoided or spread out more evenly across the physical surface of the integrated circuit. It is also possible to use integrated temperature sensors within the multi-core processor itself to detect junction temperatures (Tj), and to locate hotspots so that they may be avoided (e.g. by spreading out the load to other processing cores within the multi-core processor). Also, processing may be moved to different equivalent processing cores (i.e. moving processing from an enabled processing core to a currently disabled but identical processing core, and disabling the originating processing core) if an originating processing core is getting too hot, for example, if it is immediately adjacent another operating processing core.
In the opposite direction, i.e. going from state 640 to state 650, similar uniform load, low leakage and junction temperature considerations apply, and are shown as transition 608.
When in state 640 ('N cores enabled, high performance'), a transition towards state 630 ('single core enabled, high performance') might be required, for example when a non-uniform load is detected, or there is high leakage per enabled processing core - transition 610. In the opposite direction, a transition may occur from state 630 to state 640 when, for example, an overload occurs (where there is a requirement for processing power beyond what a single core is physically capable of providing) - transition 609.
Since it is usually preferable to take measured steps to match the power usage of a multi-core processor to the processing requirement loads, transitions between the two extreme states 620 and 640 (i.e. 'single core enabled, low performance' and 'N cores enabled, high performance') typically occur through one of the other two, more intermediate states 630 and 650, and not directly. However, a direct transition (not shown) may equally be used when the processing power needs are extremely contrasting.
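The transitions of Figure 6 can be summarised as a small state machine. The sketch below uses the state names and transition numbers from the figure, but the boolean trigger inputs and their encoding are assumptions made purely for illustration (the figure's transitions also involve junction temperature and load uniformity, which are folded into these flags here):

```python
# The four states of Figure 6, encoded as (cores, performance) tuples.
SINGLE_LOW  = ("single core", "low")    # state 620
SINGLE_HIGH = ("single core", "high")   # state 630
N_HIGH      = ("n cores", "high")       # state 640
N_LOW       = ("n cores", "low")        # state 650

def next_state(state, need_more_perf, high_leakage, low_splitability):
    if state == SINGLE_LOW and need_more_perf:
        # High leakage favours one fast core (601); otherwise spread
        # the load over more cores at low performance (603).
        return SINGLE_HIGH if high_leakage else N_LOW
    if state == SINGLE_HIGH:
        if not need_more_perf:
            return SINGLE_LOW            # transition 602
        return N_HIGH                    # overload, transition 609
    if state == N_LOW:
        if high_leakage or low_splitability:
            return SINGLE_HIGH           # transition 605
        if need_more_perf:
            return N_HIGH                # overload, transition 607
    if state == N_HIGH and (high_leakage or low_splitability):
        return SINGLE_HIGH               # transition 610
    return state  # e.g. N_LOW is kept while the load remains uniform

print(next_state(SINGLE_LOW, True, False, False))  # -> N_LOW via 603
```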
The above description of Fig. 6 shows and explains the main considerations for the power management entity when deciding on what processing cores to enable/disable, and what power and frequency levels to apply to the enabled cores. The exact choice of power options to apply to the processing cores is variable according to the inherent characteristics of the hardware and software being used together, as well as the environment they are operating in. Pre-testing those characteristics through simulation or dry runs may be required for optimum multi-core power management.
Accordingly, examples show a method of managing power in a multi-core processing environment, and in particular within data processing systems having a multi-core processor therein.
The above described method and apparatus may be accomplished, for example, by adjusting the structure/operation of the data processing system, and in particular the core power management entity 112 within the multi-core processor.
The invention may also be implemented in a computer program for running on a computer system, at least including code portions for performing steps of a method according to the invention when run on a programmable apparatus, such as a computer system, or for enabling a programmable apparatus to perform functions of a device or system according to the invention.
A computer program is a list of instructions such as a particular application program and/or an operating system. The computer program may for instance include one or more of: a subroutine, a function, a procedure, an object method, an object implementation, an executable application, an applet, a servlet, a source code, an object code, a shared library/dynamic load library and/or other sequence of instructions designed for execution on a computer system.
The computer program may be stored internally on a computer readable storage medium or transmitted to the computer system via a computer readable transmission medium. All or some of the computer program may be provided on computer readable media permanently, removably or remotely coupled to an information processing system. The computer readable media may include, for example and without limitation, any number of the following: magnetic storage media including disk and tape storage media; optical storage media such as compact disk media (e.g., CD-ROM, CD-R, etc.) and digital video disk storage media; nonvolatile memory storage media including semiconductor-based memory units such as FLASH memory, EEPROM, EPROM, ROM; ferromagnetic digital memories; MRAM; volatile storage media including registers, buffers or caches, main memory, RAM, etc.; and data transmission media including computer networks, point-to-point telecommunication equipment, and carrier wave transmission media, just to name a few.
A computer process typically includes an executing (running) program or portion of a program, current program values and state information, and the resources used by the operating system to manage the execution of the process. An operating system (OS) is the software that manages the sharing of the resources of a computer and provides programmers with an interface used to access those resources. An operating system processes system data and user input, and responds by allocating and managing tasks and internal system resources as a service to users and programs of the system.
The computer system may for instance include at least one processing unit, associated memory and a number of input/output (I/O) devices. When executing the computer program, the computer system processes information according to the computer program and produces resultant output information via I/O devices.
Computer readable media may be permanently, removably or remotely coupled to an information processing system such as data processing system 100a/b.
Some of the above examples of embodiments, as applicable, may be implemented in a variety of different information/data processing systems. For example, although the figures and the discussion thereof describe exemplary information processing architectures, these exemplary architectures are presented merely to provide a useful reference in discussing various aspects of the invention. Of course, the description of the architectures has been simplified for purposes of discussion, and it is just one of many different types of appropriate architectures that may be used in accordance with the invention. Those skilled in the art will recognize that the boundaries between logic blocks are merely illustrative and that alternative embodiments may merge logic blocks or circuit elements or impose an alternate decomposition of functionality upon various logic blocks or circuit elements.
Thus, it is to be understood that the architectures depicted herein are merely exemplary, and that in fact many other architectures can be implemented which achieve the same functionality. In an abstract, but still definite sense, any arrangement of components to achieve the same functionality is effectively "associated" such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as "associated with" each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated can also be viewed as being "operably connected," or "operably coupled," to each other to achieve the desired functionality.
Furthermore, those skilled in the art will recognize that the boundaries between the above described operations are merely illustrative. Multiple operations may be combined into a single operation, a single operation may be distributed in additional operations, and operations may be executed at least partially overlapping in time. Moreover, alternative embodiments may include multiple instances of a particular operation, and the order of operations may be altered in various other embodiments. Also for example, in some embodiments, the illustrated elements of data processing systems 100a/b are circuitry located on a single integrated die or circuit or within a same device. Alternatively, data processing systems 100a/b may include any number of separate integrated circuits or separate devices interconnected with each other. For example, power management entity 112 may be located on a same integrated circuit as CPU 110, or on a separate integrated circuit, or within another peripheral or slave discretely separate from other elements of data processing system 100a/b. Also for example, data processing system 100a/b or portions thereof may be soft or code representations of physical circuitry or of logical representations convertible into physical circuitry. As such, data processing system 100a/b may be embodied in a hardware description language of any appropriate type.
However, other modifications, variations and alternatives are also possible. The specification and drawings are, accordingly, to be regarded in an illustrative rather than in a restrictive sense.
As discussed, in one embodiment, the data processing system is a computer system such as personal computer system 100a. Other embodiments may include different types of computer systems, such as mobile data processing system 100b. Data processing systems are information handling systems which can be designed to give independent computing power to one or more users. Data processing systems may be found in many forms including, but not limited to, mainframes, minicomputers, servers, workstations, personal computers, notepads, personal digital assistants, electronic games, automotive and other embedded systems, cell phones and various other wireless devices. A typical computer system includes at least one processing unit, associated memory and a number of input/output (I/O) devices.
A data processing system processes information according to a program (i.e. application) and produces resultant output information via I/O devices. A program is a list of instructions such as a particular application program and/or an operating system. A computer program is typically stored internally on computer readable storage medium or transmitted to the computer system via a computer readable transmission medium, such as wireless module 160. A computer process typically includes an executing (running) program or portion of a program, current program values and state information, and the resources used by the operating system to manage the execution of the process. A parent process may spawn other, child processes to help perform the overall functionality of the parent process. Because the parent process specifically spawns the child processes to perform a portion of the overall functionality of the parent process, the functions performed by child processes (and grandchild processes, etc.) may sometimes be described as being performed by the parent process.
The term "coupled," as used herein, is not intended to be limited to a direct coupling or a mechanical coupling.
In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word 'comprising' does not exclude the presence of other elements or steps than those listed in a claim. Furthermore, the terms "a" or "an," as used herein, are defined as one or more than one. Also, the use of introductory phrases such as "at least one" and "one or more" in the claims should not be construed to imply that the introduction of another claim element by the indefinite articles "a" or "an" limits any particular claim containing such introduced claim element to inventions containing only one such element, even when the same claim includes the introductory phrases "one or more" or "at least one" and indefinite articles such as "a" or "an." The same holds true for the use of definite articles. Unless stated otherwise, terms such as "first" and "second" are used to arbitrarily distinguish between the elements such terms describe. Thus, these terms are not necessarily intended to indicate temporal or other prioritization of such elements. The mere fact that certain measures are recited in mutually different claims does not indicate that a combination of these measures cannot be used to advantage.
In the foregoing specification, the invention has been described with reference to specific examples of embodiments of the invention. It will, however, be evident that various modifications and changes may be made therein without departing from the broader spirit and scope of the invention as set forth in the appended claims.

Claims

1. A method of managing power in a multi-core data processing system having two or more processing cores, comprising:
determining usage characteristics for the two or more processing cores within the multi-core processing unit; and
dependent on the determined usage characteristics:
adapting a frequency or voltage supplied to each of the two or more processing cores; and/or
adapting enablement signals provided to each of the two or more processing cores.
2. The method of claim 1, wherein the usage characteristics are dependent on any one or more of:
an ability of an application or set of applications to be or currently being executed by the multi-core data processing system to be split across multiple processing cores;
a dynamic power estimation of the multi-core data processing system;
a pre-determined junction temperature;
a leakage power estimation of the multi-core data processing system; and
a required level of performance.
3. The method of claim 1 or 2, wherein the leakage power estimation is based upon a production parameter of the multi-core data processing system or an operating temperature of the multi-core data processing system.
4. The method of any of claims 1 to 3, wherein adapting a frequency or voltage supplied to each of the two or more processing cores comprises either:
increasing an operating frequency and an operating voltage supplied to a respective one or more of the two or more processing cores within the multi-core data processing system; or
decreasing the operating frequency and the operating voltage supplied to a respective one or more of the two or more processing cores within the multi-core data processing system.
5. The method of any of claims 1 to 4, wherein adapting enablement signals provided to each of the two or more processing cores comprises either:
enabling one or more processing cores within the multi-core data processing system; or disabling one or more processing cores within the multi-core data processing system.
6. The method of claim 5, further comprising moving a thread from a previously enabled processing core to a previously disabled, newly enabled processing core and disabling the previously enabled processing core.
7. The method of any of claims 4 to 6, wherein adapting an operating frequency and an operating voltage supplied to each of the two or more processing cores and adapting enablement signals provided to each of the two or more processing cores comprises any one or more of the transitions shown in Figure 6.
8. The method of claim 7, wherein a number of enabled processing cores is decreased in response to a determination of there being:
a low ability of an application or applications to be or currently being executed by the multi-core data processing system to be split across multiple processing cores; and/or a high processing core leakage current; and/or
an excessive level of performance available.
9. The method of claim 7, wherein a number of enabled processing cores is increased in response to a determination of there being:
a high ability of an application or applications to be or currently being executed by the multi-core data processing system to be split across multiple processing cores; and/or a low processing core leakage current; and/or
an insufficient level of performance available; and/or
a maximum transistor junction temperature, Tj, has been reached.
10. The method of claim 8 or 9, wherein in response to a determination of an insufficient level of performance, the operating frequency and operating voltage of the enabled processing cores may be increased.
11. The method of claim 8 or 9, wherein in response to a determination of an excessive level of performance, the operating frequency and operating voltage of the enabled processing cores may be decreased.
12. The method of any of claims 4 to 11, wherein increased performance is achieved through a suitable combination of increasing the number of enabled processing cores and increasing the enabled cores' operating frequency and operating voltage.
13. The method of any of claims 4 to 12, wherein decreased performance is achieved through a suitable combination of decreasing the number of enabled processing cores and decreasing the enabled cores' operating frequency and operating voltage.
14. The method of claim 12 or 13, wherein the number of processing cores enabled and the operating frequency and operating voltage of the enabled processing cores tends to a minimum to maintain a required level of performance.
15. An apparatus comprising:
a multi-core processing unit having two or more processing cores; and
a power management entity operably coupled to the two or more processing cores; wherein the power management entity is arranged to carry out the method of any of claims 1 to 14.
PCT/IB2010/055416 2010-11-25 2010-11-25 Method and apparatus for managing power in a multi-core processor WO2012069881A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP10860047.9A EP2643741A4 (en) 2010-11-25 2010-11-25 Method and apparatus for managing power in a multi-core processor
CN201080070336.5A CN103229123B (en) 2010-11-25 2010-11-25 The method and device of power is managed in polycaryon processor
PCT/IB2010/055416 WO2012069881A1 (en) 2010-11-25 2010-11-25 Method and apparatus for managing power in a multi-core processor
US13/989,280 US9335805B2 (en) 2010-11-25 2010-11-25 Method and apparatus for managing power in a multi-core processor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/IB2010/055416 WO2012069881A1 (en) 2010-11-25 2010-11-25 Method and apparatus for managing power in a multi-core processor

Publications (1)

Publication Number Publication Date
WO2012069881A1 true WO2012069881A1 (en) 2012-05-31

Family

ID=46145419

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2010/055416 WO2012069881A1 (en) 2010-11-25 2010-11-25 Method and apparatus for managing power in a multi-core processor

Country Status (4)

Country Link
US (1) US9335805B2 (en)
EP (1) EP2643741A4 (en)
CN (1) CN103229123B (en)
WO (1) WO2012069881A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9552046B2 (en) 2012-09-21 2017-01-24 Htc Corporation Performance management methods for electronic devices with multiple central processing units

Families Citing this family (30)

Publication number Priority date Publication date Assignee Title
US9086883B2 (en) * 2011-06-10 2015-07-21 Qualcomm Incorporated System and apparatus for consolidated dynamic frequency/voltage control
JP5643903B2 (en) * 2011-12-23 2014-12-17 インテル・コーポレーション Method and apparatus for efficient communication between caches in a hierarchical cache design
WO2014006450A1 (en) * 2012-07-03 2014-01-09 Freescale Semiconductor, Inc. Method and apparatus for managing a thermal budget of at least a part of a processing system
US9569279B2 (en) * 2012-07-31 2017-02-14 Nvidia Corporation Heterogeneous multiprocessor design for power-efficient and area-efficient computing
US9037889B2 (en) * 2012-09-28 2015-05-19 Intel Corporation Apparatus and method for determining the number of execution cores to keep active in a processor
JPWO2014061141A1 (en) * 2012-10-18 2016-09-05 トヨタ自動車株式会社 Parallel computing device
US9417925B2 (en) 2012-10-19 2016-08-16 Microsoft Technology Licensing, Llc Dynamic functionality partitioning
US9110670B2 (en) * 2012-10-19 2015-08-18 Microsoft Technology Licensing, Llc Energy management by dynamic functionality partitioning
US8884906B2 (en) * 2012-12-21 2014-11-11 Intel Corporation Offloading touch processing to a graphics processor
JP6051924B2 (en) * 2013-02-21 2016-12-27 富士通株式会社 Information processing apparatus control method, control program, and information processing apparatus
US9304573B2 (en) * 2013-06-21 2016-04-05 Apple Inc. Dynamic voltage and frequency management based on active processors
US9471132B2 (en) * 2013-09-27 2016-10-18 Intel Corporation Techniques for putting platform subsystems into a lower power state in parallel
US9563724B2 (en) * 2013-09-28 2017-02-07 International Business Machines Corporation Virtual power management multiprocessor system simulation
US9553822B2 (en) 2013-11-12 2017-01-24 Microsoft Technology Licensing, Llc Constructing virtual motherboards and virtual storage devices
US9541985B2 (en) 2013-12-12 2017-01-10 International Business Machines Corporation Energy efficient optimization in multicore processors under quality of service (QoS)/performance constraints
US8972760B1 (en) * 2013-12-20 2015-03-03 Futurewei Technologies, Inc. Method and apparatus for reducing power consumption in a mobile electronic device using a second launcher
US9606605B2 (en) 2014-03-07 2017-03-28 Apple Inc. Dynamic voltage margin recovery
KR102169692B1 (en) * 2014-07-08 2020-10-26 삼성전자주식회사 System on chip including multi-core processor and dynamic power management method thereof
US10542233B2 (en) * 2014-10-22 2020-01-21 Genetec Inc. System to dispatch video decoding to dedicated hardware resources
CN106293927A (en) * 2015-06-01 2017-01-04 联想(北京)有限公司 Control method and electronic equipment
US10459759B2 (en) * 2015-08-26 2019-10-29 Netapp, Inc. Migration between CPU cores
US9910700B2 (en) * 2015-08-26 2018-03-06 Netapp, Inc. Migration between CPU cores
US10831620B2 (en) * 2016-06-15 2020-11-10 International Business Machines Corporation Core pairing in multicore systems
US10503238B2 (en) 2016-11-01 2019-12-10 Microsoft Technology Licensing, Llc Thread importance based processor core parking and frequency selection
US10372494B2 (en) 2016-11-04 2019-08-06 Microsoft Technology Licensing, Llc Thread importance based processor core partitioning
JP2018106591A (en) * 2016-12-28 2018-07-05 ルネサスエレクトロニクス株式会社 Semiconductor device, operation control method, and program
KR20180098904A (en) * 2017-02-27 2018-09-05 삼성전자주식회사 Computing device and method for allocating power to the plurality of cores in the computing device
CN107479666B (en) * 2017-06-30 2020-11-27 Oppo广东移动通信有限公司 Terminal device, temperature rise control method, control device, and storage medium
WO2021081813A1 (en) * 2019-10-30 2021-05-06 阿里巴巴集团控股有限公司 Multi-core processor and scheduling method therefor, device, and storage medium
US20220334558A1 (en) * 2021-04-15 2022-10-20 Mediatek Inc. Adaptive thermal ceiling control system

Citations (3)

Publication number Priority date Publication date Assignee Title
US20060282692A1 (en) * 2005-06-10 2006-12-14 Lg Electronics Inc. Controlling power supply in a multi-core processor
US20100153954A1 (en) * 2008-12-11 2010-06-17 Qualcomm Incorporated Apparatus and Methods for Adaptive Thread Scheduling on Asymmetric Multiprocessor
US20100169692A1 (en) * 2006-05-03 2010-07-01 Edward Burton Mechanism for adaptively adjusting a direct current loadline in a multi-core processor

Family Cites Families (14)

Publication number Priority date Publication date Assignee Title
US6804632B2 (en) 2001-12-06 2004-10-12 Intel Corporation Distribution of processing activity across processing hardware based on power consumption considerations
US7502948B2 (en) * 2004-12-30 2009-03-10 Intel Corporation Method, system, and apparatus for selecting a maximum operation point based on number of active cores and performance level of each of the active cores
US7555664B2 (en) * 2006-01-31 2009-06-30 Cypress Semiconductor Corp. Independent control of core system blocks for power optimization
US8051276B2 (en) * 2006-07-07 2011-11-01 International Business Machines Corporation Operating system thread scheduling for optimal heat dissipation
US7949887B2 (en) 2006-11-01 2011-05-24 Intel Corporation Independent power control of processing cores
US7865751B2 (en) * 2007-06-18 2011-01-04 Intel Corporation Microarchitecture controller for thin-film thermoelectric cooling
US8555283B2 (en) * 2007-10-12 2013-10-08 Oracle America, Inc. Temperature-aware and energy-aware scheduling in a computer system
US8578193B2 2007-11-28 2013-11-05 International Business Machines Corporation Apparatus, method and program product for adaptive real-time power and performance optimization of multi-core processors
US8024590B2 (en) 2007-12-10 2011-09-20 Intel Corporation Predicting future power level states for processor cores
US8010822B2 (en) * 2008-03-28 2011-08-30 Microsoft Corporation Power-aware thread scheduling and dynamic use of processors
US8296773B2 (en) * 2008-06-30 2012-10-23 International Business Machines Corporation Systems and methods for thread assignment and core turn-off for integrated circuit energy efficiency and high-performance
US8402290B2 (en) * 2008-10-31 2013-03-19 Intel Corporation Power management for multiple processor cores
US20110191602A1 (en) * 2010-01-29 2011-08-04 Bearden David R Processor with selectable longevity
US8942932B2 (en) * 2010-08-31 2015-01-27 Advanced Micro Devices, Inc. Determining transistor leakage for an integrated circuit


Non-Patent Citations (2)

Title
BERGAMASCHI, R., ET AL.: "Exploring Power Management in Multi-Core Systems", ASP-DAC'08, 21 January 2008 (2008-01-21), pages 708 - 710, XP031241442 *
See also references of EP2643741A4 *


Also Published As

Publication number Publication date
US20130238912A1 (en) 2013-09-12
CN103229123A (en) 2013-07-31
EP2643741A4 (en) 2016-08-24
US9335805B2 (en) 2016-05-10
CN103229123B (en) 2016-08-31
EP2643741A1 (en) 2013-10-02

Similar Documents

Publication Publication Date Title
US9335805B2 (en) Method and apparatus for managing power in a multi-core processor
JP6005895B1 (en) Intelligent multi-core control for optimal performance per watt
US9378536B2 (en) CPU/GPU DCVS co-optimization for reducing power consumption in graphics frame processing
US9405340B2 (en) Apparatus and method to implement power management of a processor
EP2823459B1 (en) Execution of graphics and non-graphics applications on a graphics processing unit
US20130074077A1 (en) Methods and Apparatuses for Load Balancing Between Multiple Processing Units
US20210351587A1 (en) Interface circuitry with multiple direct current power contacts
TWI557541B (en) Fine grained power management in virtualized mobile platforms
KR20150063543A (en) Controlling configurable peak performance limits of a processor
US11144085B2 (en) Dynamic maximum frequency limit for processing core groups
US9747038B2 (en) Systems and methods for a hybrid parallel-serial memory access
JP2017515232A (en) Dynamic load balancing of hardware threads in a cluster processor core using shared hardware resources and associated circuits, methods, and computer readable media
CN109564458A (en) Application program is specific, performance aware it is energy-optimised
US9829952B2 (en) Processor that has its operating frequency controlled in view of power consumption during operation and semiconductor device including the same
CN112084023A (en) Data parallel processing method, electronic equipment and computer readable storage medium
US9817759B2 (en) Multi-core CPU system for adjusting L2 cache character, method thereof, and devices having the same
US11933843B2 (en) Techniques to enable integrated circuit debug across low power states
CN115775199A (en) Data processing method and device, electronic equipment and computer readable storage medium
CN110096315B (en) Component loading method and device
US9625526B2 (en) Method and apparatus for scan chain data management
US8307226B1 (en) Method, apparatus, and system for reducing leakage power consumption
US10628367B2 (en) Techniques for dynamically modifying platform form factors of a mobile device
US11934248B2 (en) Performance and power tuning user interface
US20240111560A1 (en) Workload linked performance scaling for servers
US20220291733A1 (en) Methods and apparatus to reduce display connection latency

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10860047

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 13989280

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2010860047

Country of ref document: EP