WO2007113744A1 - A device and a method for power scheduling of data requests - Google Patents

A device and a method for power scheduling of data requests

Info

Publication number
WO2007113744A1
WO2007113744A1 (PCT/IB2007/051094)
Authority
WO
WIPO (PCT)
Prior art keywords
requests
auxiliary
executed
priority
data requests
Prior art date
Application number
PCT/IB2007/051094
Other languages
French (fr)
Inventor
Gilein De Nijs
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V. filed Critical Koninklijke Philips Electronics N.V.
Publication of WO2007113744A1 publication Critical patent/WO2007113744A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/26Power supply means, e.g. regulation thereof
    • G06F1/32Means for saving power
    • G06F1/3203Power management, i.e. event-based initiation of a power-saving mode
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/26Power supply means, e.g. regulation thereof
    • G06F1/32Means for saving power
    • G06F1/3203Power management, i.e. event-based initiation of a power-saving mode
    • G06F1/3234Power saving characterised by the action undertaken
    • G06F1/325Power saving in peripheral device
    • G06F1/3268Power saving in hard disk drive
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671In-line storage system
    • G06F3/0673Single storage device
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the invention relates to a device for power scheduling of data requests.
  • the invention further relates to a method for power scheduling of data requests.
  • the invention further relates to a program element.
  • the invention further relates to a computer-readable medium.
  • the power consumed translates directly into cost for an end user.
  • the power consumed by a home video cassette recorder results in an energy bill.
  • the power consumed also translates into a certain time for which a device will operate. For example, when a portable audio player runs on batteries. In general saving power is always of benefit to an end user. This is true for both situations in the home and on the move.
  • Electronic devices operating in the Consumer Electronics domain generally have further requirements of providing a guaranteed quality of service to an end user.
  • an audio playback device should always play the audio desired by an end user and a video playback device should always play the video desired by an end user and this should be performed without noticeable glitches in the audio or video playback. Therefore, such devices often work in a streaming manner, i.e. with real time guarantees.
  • a buffer is generally the remaining memory space that is not used by any processes running on the device. In this way all available remaining space is used to optimize the power consumption.
  • WO 2004/066293 provides energy efficient disk scheduling for mobile applications in the presence of both streaming requests and auxiliary requests.
  • the auxiliary requests are requests which concern auxiliary information, such as program data and executable code, libraries, database requests, user interface driven requests, network driven requests etc.
  • Such auxiliary requests may also be the normal requests generated on any general purpose computer known to the skilled person and may be defined as those requests that do not possess a real time deadline for execution.
  • a commonly used term for such requests is best effort requests, in that the device or system executing them generally makes the best effort possible to execute such requests in the shortest possible time.
  • streaming requests and auxiliary requests are scheduled in a manner that optimizes power consumption by transitioning the operation mode of a disk supplying information between a non-operating, i.e. low power, mode and an operating, i.e. high-power, mode.
  • Requests are scheduled such that the streaming requests are guaranteed to be serviced in time whilst taking into account the priority of the auxiliary requests.
  • Use is made of a buffer to hold data for processing whilst the disk may be set in standby mode or powered-off completely.
  • An energy saving scheduling means ensures that the quality of service remains guaranteed by filling the buffer when necessary.
  • Two priorities are envisioned for auxiliary requests: high priority auxiliary requests, which force the disk to enter the operating mode immediately, wherein all pending streaming requests are also executed to completely re-fill the buffer; and low priority auxiliary requests, which are postponed until the next occasion upon which the disk transitions to an operating mode. This is generally caused by the execution of a subsequent streaming request.
  • postponing auxiliary requests can lead to delays ranging from 10's of seconds, for example, when high definition video is also being serviced, to 10's of minutes, for example, when only compressed audio is being serviced. This therefore depends completely upon the streaming requests being serviced. Therefore the only option open is to assign a high priority to such requests, whilst they are, in fact, not high priority requests.
  • when such applications also produce auxiliary requests in a regular manner, i.e. periodically, this may lead to the disk never being powered down and therefore to no power conservation at all. Such periodic operation is again common in networking applications.
  • a mobile audio/video device with storage and networking facilities may stream information from the built-in storage device to a remote display whilst simultaneously allowing a user to browse the Internet or his/her home network using a suitable browser application.
  • Such newer devices would also benefit from the optimization of power consumption in the presence of both streaming requests and a range of different auxiliary requests whilst providing a guaranteed quality of service, however, the prior art gives no indications as to how this may optimally be achieved. The inventors recognizing this problem devised the present invention.
  • the present invention seeks to address one or more shortcomings of the prior art.
  • a device for power scheduling of data requests comprising auxiliary requests regarding auxiliary information
  • the device comprising a storage means adapted to store and/or retrieve information defined by the data requests, a priority determination unit adapted to determine a priority of each one of the auxiliary requests, a timeout assignment unit adapted to assign a predetermined timeout parameter to each one of the auxiliary requests dependent upon the priority determined, a request scheduling unit adapted to determine the data requests, if any, that are to be executed based upon the predetermined timeout parameter and a storage means mode controller adapted to set the storage means to a non-operating mode when no data requests are pending and set the storage means to an operating mode when at least one of the data requests is to be executed.
  • a method for power scheduling of data requests comprising auxiliary requests regarding auxiliary information
  • the method comprising the steps of receiving the data requests defining information to be stored and/or retrieved, determining a priority of each one of the auxiliary requests, assigning a predetermined timeout parameter to each one of the auxiliary requests dependent upon the priority determined, determining the data requests, if any, that are to be executed based upon the predetermined timeout parameter and setting the storage means to a non-operating mode when no data requests are pending and setting the storage means to an operating mode when at least one of the data requests is to be executed.
  • a program element is provided, the program element being directly loadable into the memory of a programmable device, comprising software code portions for performing, when said program element is run on the device, the method steps of receiving data requests defining information to be stored and/or retrieved, the data requests comprising auxiliary requests regarding auxiliary information, determining a priority of each one of the auxiliary requests, assigning a predetermined timeout parameter to each one of the auxiliary requests dependent upon the priority determined, determining the data requests, if any, that are to be executed based upon the predetermined timeout parameter and setting the storage means to a non-operating mode when no data requests are pending and setting the storage means to an operating mode when at least one of the data requests is to be executed.
  • a computer-readable medium directly loadable into the memory of a programmable device, comprising software code portions for performing, when said code portions are run on the device, the method steps of receiving data requests defining information to be stored and/or retrieved, the data requests comprising auxiliary requests regarding auxiliary information, determining a priority of each one of the auxiliary requests, assigning a predetermined timeout parameter to each one of the auxiliary requests dependent upon the priority determined, determining the data requests, if any, that are to be executed based upon the predetermined timeout parameter and setting the storage means to a non-operating mode when no data requests are pending and setting the storage means to an operating mode when at least one of the data requests is to be executed.
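  • Purely as an illustration (not part of the claims), the units recited above might be sketched in Python roughly as follows; the class names, the spin_up/spin_down calls and the timeout table are assumptions made for the sketch.

```python
import math

# Hypothetical sketch of the claimed units; names and interfaces are assumed.
class PriorityDeterminationUnit:
    def determine(self, request):
        # A priority may have been set by the issuing application or be
        # implied by the API through which the request arrived.
        return getattr(request, "priority", "BE_DEFAULT")


class TimeoutAssignmentUnit:
    def __init__(self, timeout_table):
        self.timeout_table = timeout_table            # priority -> seconds

    def assign(self, request, priority):
        # 0 means execute immediately, math.inf means wait for the next spin up.
        request.timeout = self.timeout_table.get(priority, math.inf)


class StorageMeansModeController:
    def __init__(self, storage):
        self.storage = storage

    def set_mode(self, any_request_to_execute):
        if any_request_to_execute:
            self.storage.spin_up()                    # operating mode
        else:
            self.storage.spin_down()                  # non-operating mode, saves power
```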
  • the data requests may further comprise streaming requests regarding real time information and a predicted spin up time to satisfy the streaming requests may be determined.
  • Real time information is information having a real time priority and is not limited to only information arriving in real time.
  • stored video is to be understood as real time information in the context of this specification since the display of the stored video has a real time aspect.
  • the auxiliary requests for which the predetermined timeout parameter is smaller than the time until the next predicted spin up time may be marked as data requests to be executed. This allows an immediate decision to be taken whether a postponement of the auxiliary requests is sensible, or not.
  • the expiration of the predetermined timeout parameter may be detected for at least one of the auxiliary requests. Upon said detection the auxiliary requests concerned may be marked as data requests to be executed. This ensures that auxiliary requests are executed within the time period indicated by the predetermined timeout parameter, even in the case that no further streaming requests or high priority requests are to be executed.
  • the predetermined timeout parameter may be assigned an execute immediately value for auxiliary requests determined to be of high priority. This may ensure the quickest possible response for auxiliary requests that, for example, a user is waiting upon.
  • auxiliary requests that are to be immediately executed and any pending streaming requests may be marked as data requests to be executed. This refills the buffer completely with any pending streaming requests allowing the next period of non-operation to be lengthened.
  • an infinite timeout value may be assigned to the predetermined timeout parameter should the priority be determined as low. Such an assignment allows low priority requests to be postponed until the next occasion when an operating mode is entered. This saves power for the complete period of time until the transition to the operating mode.
  • auxiliary requests that have been assigned an infinite timeout value may be marked as data requests to be executed when the mode of the storage means is set to an operating mode. Such a measure allows low priority requests to be executed when the transition to the operating mode occurs.
  • an intermediate timeout value, between an execute immediately value and an infinite timeout value, may be assigned to the predetermined timeout parameter should the priority be determined as intermediate between a low and a high priority. This measure allows a finer granularity to be achieved in the scheduling of requests.
  • a buffer may be used for temporarily storing the information and the streaming requests may be marked as requests to be executed based upon a filling level of the buffer.
  • additional data may be read immediately following the information and the additional data may be stored for providing the additional data in the event of receiving a next auxiliary request requesting the additional data.
  • the pre-reading of additional data is useful in systems using synchronous data requests to improve the responsiveness of the system.
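  • A minimal sketch of such pre-reading, assuming a simple byte-addressed storage interface storage.read(offset, length); the cache layout and the read-ahead size are illustrative assumptions, not taken from the description.

```python
READ_AHEAD_BYTES = 64 * 1024        # assumed read-ahead size

read_ahead_cache = {}               # start offset -> pre-read bytes

def read_with_read_ahead(storage, offset, length):
    # Read the requested range plus the data immediately following it, so a
    # later synchronous request for that following data can be served while
    # the storage means stays in the non-operating mode.
    data = storage.read(offset, length + READ_AHEAD_BYTES)
    read_ahead_cache[offset + length] = data[length:]
    return data[:length]

def serve_auxiliary_read(storage, offset, length):
    cached = read_ahead_cache.get(offset)
    if cached is not None and len(cached) >= length:
        return cached[:length]       # served from memory, no spin up needed
    return read_with_read_ahead(storage, offset, length)
```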
  • the request scheduling unit may be further adapted to mark all pending data requests as data requests to be executed when the storage means is set to an operating mode. This allows the buffer to be filled completely with any future pending streaming requests or any auxiliary requests that were pending. Therefore, the next spin up time may be postponed further to the future.
  • a device may be realized as at least one of the group consisting of a Set-Top-Box device, a digital video recording device, a network-enabled device, a conditional access system, a portable audio player, a portable video player, a mobile phone, a DVD player, a CD player, a hard disk based media player, an Internet radio device, a computer, a television, a public entertainment device and an MP3 player.
  • the data processing required according to the invention can be realized by a computer program, that is to say by software, or by using one or more special electronic optimization circuits, that is to say in hardware, or in hybrid form, that is to say by means of software components and hardware components.
  • Fig. 1 illustrates a device for power scheduling of data requests.
  • Fig. 2 illustrates the structure and the interconnecting data paths of the device of Fig. 1.
  • Fig. 3 illustrates the structure of a scheduler and the internal scheduler data paths according to the prior art.
  • Fig. 4 illustrates a flow chart used for power scheduling according to the prior art.
  • Fig. 5 illustrates the structure of a scheduler and the internal scheduler data paths according to the present invention.
  • Fig. 6 illustrates the structure of a second scheduler according to the present invention.
  • Fig. 7 illustrates the structure of a third scheduler according to the present invention.
  • Fig. 8 illustrates a flow chart used for power scheduling according to the present invention.
  • Fig. 9 illustrates a second flow chart used for power scheduling according to the present invention.
  • Fig. 10 illustrates a third flow chart used for power scheduling according to the present invention.
  • Fig. 11 illustrates a timeline of a typical real time streaming buffer refill cycle with streaming requests and auxiliary requests in between the streaming requests.
  • Fig. 1 shows a device 100 according to the invention.
  • the device 100 comprises a storage means 170, which may be a hard disk drive, a floppy disc drive, a flash memory device or equivalent.
  • the storage means 170 may be used to store audio and video that a user 192 would like to preserve or render.
  • the device 100 may, for example, be a portable audio/video jukebox device running on a battery (not shown).
  • the device 100 may also comprise a codec 150 for encoding and/or decoding audio/video data streams for display on a local display 160.
  • An audio rendering device (not shown), such as a speaker, may also be present in the device 100.
  • the device 100 may also comprise a means for communication 130, such as an Ethernet interface, in wired or wireless form, a WiFi interface, a Bluetooth interface or a mobile phone network interface.
  • a network interface controller may also be understood as a means for communication 130.
  • the device 100 may then also receive one or more data streams via the means for communication 130 for decoding using the codec 150 and further display on the local display 160 or for storing in the storage means 170.
  • the means for communication 130 may also be used to transmit data streams to a display on a remote server 165 via network 180.
  • the network 180 may be a local network or a worldwide network such as the Internet.
  • the user 192 may interact with the device 100 using a user interface 190.
  • the user 192 interacts with the user interface 190 using a remote control 191, but other means of interaction are also possible.
  • the user 192 may interact with the device 100 using a touch screen, a scroll wheel, buttons, a mouse or other pointer device, a keyboard etc.
  • the means for communication 130 and the storage means 170 generally consume significant amounts of power during operation.
  • a buffer 110 may be used to temporarily store data streams such that component units, such as the storage means 170 and the means for communication 130, may be powered down. This ensures that the data streams may still be processed and that the quality of service expected by the user 192 is preserved.
  • the buffer 110 may be distributed, or split, amongst the storage means 170 and the means for communication 130.
  • the buffer 110 may also be split according to each stream when multiple streams are to be serviced. In such a case each stream may have a streaming buffer.
  • the splitting of the buffer 110 may be achieved by the use of a control program running on a processor 120 and a system bus 140.
  • the system bus 140 may interconnect all of the component units comprised within the device 100, allowing the processor 120 to control each component unit.
  • In Fig. 2 the structure and the interconnecting data paths of the device of Fig. 1 are shown, as may be embodied by a suitable control program running on the processor 120 of Fig. 1.
  • a real time application 200 is shown generating streaming requests 205.
  • the streaming requests 205 are requests regarding real-time information. Such requests have deadlines that must be met to ensure that the quality of service is delivered that the user 192 expects.
  • a non real time application 210 may also be running on the processor 120.
  • the non real time application 210 may generate auxiliary requests 215.
  • the auxiliary requests 215 have no indication of real time constraints. Typically these are produced by applications that were not designed to be aware of real time constraints, though, this is not a requirement.
  • a typical example of such an application is a network application 220.
  • Network application 220 may also produce the auxiliary requests 215.
  • the network application 220 may, in fact, produce requests with real time constraints, even though no indication of the real time constraints is available.
  • Network application 220 may, for example, be an application using the Universal Plug and Play (UPnP) standard.
  • UPnP: Universal Plug and Play
  • the delays accepted might be of the order of 10's of seconds. Since such delays are orders of magnitude longer than those typically encountered in normal computer systems no measures are taken to guarantee the execution times of requests.
  • Another application having constraints on requests with a relatively long time period is a file system driver requiring consistency between the on-disk state and the in-memory state of the file system. Again, delays of the order of 10's of seconds were never expected when the application was designed.
  • the file system 230 may be any file system known to the skilled person and offer the functionality to map files, using names and directories, to logical locations on the storage means 170 where the information defined by the data requests is located.
  • the file system 230 is also aware of real time constraints and accesses the storage means 170 in blocks large enough to ensure a guaranteed quality of service even when files are fragmented.
  • the file system 230 passes the requests to a scheduler 240 where the data requests are scheduled in a manner to guarantee the quality of service of the streaming requests 205 to the user 192.
  • the scheduler 240 sends scheduled requests 250 to the storage means 170.
  • the scheduler 240 is also capable of setting the operating mode of the storage means using a mode control signal 260.
  • the scheduler 240 can set the storage means 170 to an operating mode when data requests are to be executed or to a non-operating mode when no requests are required to be executed using the mode control signal 260. In the non-operating mode power is saved.
  • the storage means 170 is a hard disk drive
  • the operating mode may be a read/write mode, a performance idle mode, an active idle mode, a low power idle mode etc.
  • the non-operating mode may be a standby mode, a sleep mode, a powered down mode etc.
  • the hard disk drive may be completely isolated from the device 100 to ensure maximum power saving by isolating any interconnecting interface. This can be achieved using field effect transistors, FETs, for example.
  • buffer 110 may be 32, 64, 128 Megabytes or even larger.
  • Buffer 110 may be Dynamic Random Access Memory, DRAM, Synchronous Dynamic Random Access Memory, SDRAM or any suitable memory technology known to the skilled person.
  • the buffer 110 uses a low power version of the memory technology.
  • the postponing of the auxiliary requests 215 can lead to delays ranging from 10's of seconds, for example, when high definition video is also being serviced, to 10's of minutes, for example, when only compressed audio is being serviced, even for relatively small buffer sizes. This depends completely upon the streaming requests being serviced, is governed by the user 192 and is not predictable.
  • Fig. 3 illustrates scheduler 240 constructed according to the prior art.
  • the streaming requests 205 and the auxiliary requests 215 enter a priority determination unit 300.
  • the priority determination unit 300 may determine a priority 315 of each auxiliary request 215 and a streaming request priority 305 of each streaming request 205.
  • the priority may have been assigned in an application, such as the real time application 200, the non real time application 210 or the network application 220.
  • the priority may also be implicitly assigned merely by the interface via which the request is received.
  • multiple Application Programming Interfaces, or APIs, may be defined for the various request priorities.
  • the scheduler 240 may have a real time API, a high priority auxiliary request API, a low priority auxiliary request API, etc.
  • the streaming request priority 305 is entirely optional since the interface via which the streaming requests 205 are received may be indication enough that the streaming requests 205 are real time requests.
  • the streaming requests 205 and the auxiliary requests 215 are placed in a request queue 310. In most operating systems used in computer systems, each read or write request gets added to a queue, such as the request queue 310.
  • a request scheduling unit 320 analyzes the request queue 310 and schedules the streaming requests 205 to guarantee a predetermined quality of service for the user 192.
  • the skilled person would also recognize the terms elevator or I/O scheduler, as being equivalent to request scheduling unit 320.
  • methods of request scheduling are well known to the skilled person. For example, suitable methods could be a Round-Robin method, an earliest deadline first method, a single sweep method, a dual sweep method, etc.
  • the request scheduling unit 320 may mark the streaming requests 205 that are required to meet any deadlines. Thereafter, the request scheduling unit 320 executes the requests by sending them to the storage means 170, such as a hard disk drive, possibly re-ordered for efficiency.
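  • As one hypothetical illustration of such re-ordering, a single sweep pass might simply sort the marked requests by their position on the disk; the logical_block_address attribute is an assumed field of a request.

```python
def single_sweep_order(marked_requests):
    # Service marked requests in ascending order of their on-disk position so
    # the head sweeps across the disk once instead of seeking back and forth.
    return sorted(marked_requests, key=lambda r: r.logical_block_address)
```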
  • the request queue 310 may be implemented as multiple request queues, one for each priority.
  • Requests on a lower priority request queue may only get executed after all higher priority request queues are empty, though, this is not essential.
  • the execution of requests requires that the storage means 170 is set to an operating mode. This may be achieved using a storage means mode controller 330.
  • the request scheduling unit 320 may communicate to the storage means mode controller 330 that requests have been marked for execution using a mode indication signal 340.
  • the storage means mode controller 330 may then set the operating mode of the storage means 170 using the mode control signal 260.
  • the prior art scheduler of Fig. 3 may, therefore, discriminate the auxiliary requests 215 using the priority 315.
  • In Fig. 4 a flowchart is illustrated, for the device of Fig. 3, indicating the operation of the prior art, WO 2004/066293 A1.
  • the storage means 170 is in a non-operating mode to save power, i.e. it is spun down.
  • data requests are received.
  • the data requests are filtered into streaming requests 205 which will force the transition of the storage means 170 to an operating mode and auxiliary requests 215, which may, or may not, cause a transition of the operating mode.
  • the terminology RT means real time.
  • the operating mode transition occurs in step 420.
  • the requests are handled at step 425 along with any other pending requests at step 430.
  • the storage means 170 is transitioned to the non-operating mode, i.e. it is then spun down or powered down, in step 435.
  • the end of the process is reached at step 440.
  • the process then waits for new requests or some other suitable trigger to change the operating mode.
  • when the auxiliary requests 215 are filtered at step 410, the prior art, WO 2004/066293 A1, discriminates the priority 315 of the auxiliary requests 215 into high priority auxiliary requests and low priority auxiliary requests at step 450.
  • the high priority auxiliary requests are intended for auxiliary requests that require an immediate response. These may be, for example, auxiliary requests that the user 192 has initiated via the user interface 190. For these high priority auxiliary requests the process transfers to step 420.
  • high priority auxiliary requests cause the storage means 170 to immediately transition to an operating mode and are, therefore, treated in a similar manner to the streaming requests 205.
  • any pending requests, be they streaming requests 205 or auxiliary requests 215, may also be executed, at step 430, after which the storage means 170 may be set to a non-operating mode in step 435.
  • the auxiliary requests 215 which are determined to be low priority auxiliary requests at step 450 are entered into the request queue 310 at step 460. They are not marked for execution and, therefore, the storage means 170 remains in a non-operating mode to save power.
  • low priority auxiliary requests remain in a wait state, at step 470, until the storage means 170 is transitioned to an operating mode. As described earlier, this may be after a considerable period of time. Thereafter the process ends at step 480. The process may then begin again.
  • auxiliary requests 215 which have inherent real time constraints that are not explicitly catered for, such as the auxiliary requests 215 resulting from the network application 220, may therefore encounter problems. This is because such requests must be treated as high priority auxiliary requests, the disadvantages of which have already been described.
  • the scheduler 240 may be constructed as illustrated in Fig. 5.
  • the streaming requests 205 and the auxiliary requests 215 enter the priority determination unit 300.
  • the priority determination unit 300 may determine the priority 315 of each auxiliary request 215 and the streaming request priority 305 of each streaming request 205. This may be performed in a similar manner as to that shown in Fig. 3 and using the relevant steps of the process of Fig. 4. Again, the streaming request priority 305 is entirely optional since the interface via which the streaming requests 205 are received may be indication enough that the streaming requests 205 are real time requests.
  • the streaming requests 205 and the auxiliary requests 215 are communicated to a timeout assignment unit 510.
  • the timeout assignment unit 510 assigns a predetermined timeout parameter 515 to the auxiliary request 215.
  • the timeout assignment unit 510 may also assign a streaming request predetermined timeout value 505 according to the streaming request priority 305.
  • the predetermined timeout parameter 515 specifies how long a request may be delayed before it is considered for execution. This does not need to be a hard guarantee, as for the streaming requests 205, because higher priority requests may always be executed first. This then does not break the priority system. However, under normal circumstances where a system is not heavily overloaded an application that issues a request with a certain priority can rely on the fact that it will be delayed at most the number of seconds defined by the predetermined timeout parameter 515 of that specific priority level.
  • the highest priority request queue may have the predetermined timeout parameter 515 set to 0 seconds and thus such requests will be executed immediately.
  • the lowest priority request queue may have the predetermined timeout parameter 515 set to an infinite number of seconds and thus such requests will always be delayed until the next streaming buffer refill.
  • the assignment may be based on a predefined table linking priorities to timeout values.
  • the table may be a static table created at design time or may be dynamically updated using a suitable API. An example of such a table is shown in Table 1.
  • Table 1 Example assignment of timeout values of different request queue priorities.
  • RT is real time
  • BEx is best effort of priority x, where best effort is a commonly used term for auxiliary requests.
  • BE1 may be defined as an execute immediately value for the predetermined timeout parameter 515 and may be used for auxiliary requests 215 with a high priority.
  • BE7 may be defined as an infinite timeout value for the predetermined timeout parameter 515 and may be used for auxiliary requests 215 with a low priority.
  • the values BE2 through BE6 may be defined as intermediate timeout values for the predetermined timeout parameter 515 and may be used for auxiliary requests 215 with a priority between a low priority and a high priority.
  • timeout values may be a predefined function of the request priority. Again, such a function may be static or dynamic in nature.
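  • In the spirit of Table 1, such an assignment might be sketched as follows; only the execute immediately value (0) and the infinite timeout value follow from the description, the intermediate second values and the default are purely illustrative assumptions.

```python
import math

TIMEOUT_BY_PRIORITY = {
    "RT":  0,          # real time requests are never delayed
    "BE1": 0,          # execute immediately value (high priority auxiliary requests)
    "BE2": 5,          # illustrative intermediate values, in seconds
    "BE3": 10,
    "BE4": 15,
    "BE5": 20,
    "BE6": 30,
    "BE7": math.inf,   # infinite timeout value (low priority auxiliary requests)
}

def assign_timeout(priority):
    # Static table lookup; a dynamically updated table or a function of the
    # priority could equally be used. The fallback models an assumed default
    # priority for requests that do not state one.
    return TIMEOUT_BY_PRIORITY.get(priority, 15)
```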
  • the timeout assignment unit 510 communicates the predetermined timeout parameter 515, and optionally the streaming request predetermined timeout value 505, to the request queue 310.
  • the streaming requests 205 and the auxiliary requests 215 may be placed in the request queue 310 by the timeout assignment unit 510 as shown in Fig. 5, though this is not essential.
  • the streaming requests 205 and the auxiliary requests 215 may be placed in the request queue 310 by the priority determination unit 300, as shown in Fig. 4.
  • the request scheduling unit 320 analyzes the request queue 310 and schedules the streaming requests 205 to guarantee a predetermined quality of service for the user 192 in a similar manner to that shown in Fig. 3 and may use the related process steps of Fig. 4 for this purpose.
  • In Fig. 8 a process is illustrated for use in the embodiment of Fig. 5.
  • the process may be implemented as a control program running on processor 120.
  • the storage means 170 is in a non-operating mode to save power, i.e. it is spun down.
  • data requests are received.
  • the priority of the data request is determined as described in the description of Fig. 5 relating to the priority determination unit 300.
  • the data requests may optionally be filtered into streaming requests 205 which will force the transition of the storage means 170 to an operating mode and auxiliary requests 215 which may, or may not, force the transition of the storage means 170 to an operating mode.
  • step 810 may discriminate the priority 315 of the auxiliary requests 215 into high priority auxiliary requests, intermediate priority requests and low priority auxiliary requests.
  • In step 820 the predetermined timeout value 515 is assigned to the auxiliary request 215 based upon the priority 315. As has been described in the description relating to the timeout assignment unit 510 of Fig. 5, this may be performed using a table, a function or any other suitable means.
  • the requests that are to be executed are determined.
  • the requests to be executed may be marked as requests to be executed. This has been described in relation to the request scheduling unit 320 of Fig. 5 and in the description relating to the example using Table 1.
  • the requests may remain in the request queue 310 even when they are marked as requests to be executed. In such a case each request should have a flag indicating whether the request is marked to be executed or not.
  • the requests pending may be scanned to identify which requests are marked as requests to be executed. If there are no requests marked as requests to be executed then the process progresses to step 850 wherein the requests are queued. The requests may then be queued in the request queue 310.
  • If the requests were already in the request queue 310 they will remain in the request queue 310, but they will not be marked, or treated, as requests to be executed. If, on the other hand, there are requests marked as requests to be executed then the process progresses to step 860.
  • the storage means 170 immediately transitions to an operating mode and the requests to be executed are, therefore, treated in a similar manner to the streaming requests 205.
  • the auxiliary requests 215 that are marked as requests to be executed and any pending requests, be they streaming requests 205 or auxiliary requests 215, may also be executed. This occurs at step 870.
  • the storage means 170 is transitioned to a non-operating mode to save power. This may be for a considerable period of time. The process of Fig. 8 may then be repeated.
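  • A hedged sketch of this flow, reusing the assign_timeout mapping sketched earlier; the storage object is assumed to expose spin_up(), spin_down() and execute(request), and to_execute is an assumed per-request flag.

```python
def schedule_auxiliary(request, queue, storage):
    request.timeout = assign_timeout(request.priority)        # steps 810 and 820
    request.to_execute = (request.timeout == 0)               # execute immediately value
    queue.append(request)

    if any(getattr(r, "to_execute", False) for r in queue):   # scan for marked requests
        storage.spin_up()                                      # step 860: operating mode
        for r in list(queue):                                  # step 870: handle the marked
            storage.execute(r)                                 # and any other pending requests
        queue.clear()
        storage.spin_down()                                    # step 880: non-operating mode
    # otherwise the requests simply remain queued (step 850)
```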
  • A further embodiment is illustrated in Fig. 6.
  • the embodiment of Fig. 6 differs from that of Fig. 5 in that the request scheduling unit 320 further comprises a prediction unit 600.
  • When the storage means 170 is a hard disk drive, this may be a spin up prediction unit.
  • the request queue 310 (or queues) with intermediate priority, i.e. between the highest and lowest priority, may have the predetermined timeout parameter 515 set to a typical timeout value of about 5 to 30 seconds, depending on the priority of the request queue.
  • the request scheduling unit 320 may now make a decision on how to act upon the request submitted to the request queue 310.
  • the request scheduling unit 320 may check the level of the buffer 110 and determine, using the prediction unit 600, how long it will be before the buffer 110 needs to be refilled to satisfy the streaming requests 205. If the buffer 110 needs to be refilled within the time period defined by the predetermined timeout parameter 515, the request scheduling unit 320 may postpone the execution of the auxiliary request 215 until that moment, so the storage means 170 does not have to spin up too early. When the buffer 110 does not need to be refilled within the time period defined by the predetermined timeout parameter 515, the auxiliary request 215 is executed immediately. The storage means 170 is spun up too early, but the delay of the request and thus the lag of the system may be kept to a minimum. The request scheduling unit 320 is therefore further adapted to determine the predicted spin up time of the storage means 170 to satisfy the streaming requests 205.
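  • The decision described above might be sketched as follows, assuming the prediction unit estimates how long the current buffer contents will last; the function and parameter names are illustrative, not from the patent.

```python
def seconds_until_next_refill(buffer_level_bytes, drain_rate_bytes_per_s,
                              refill_threshold_bytes=0):
    # Assumed prediction: time until the buffer drains to the refill threshold.
    return max(0.0, (buffer_level_bytes - refill_threshold_bytes) / drain_rate_bytes_per_s)

def decide(timeout_s, time_to_refill_s):
    if time_to_refill_s <= timeout_s:
        return "postpone"        # piggy-back on the upcoming refill, no early spin up
    return "execute_now"         # postponing any longer would exceed the allowed delay
```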
  • the horizontal axis 1100 is time in seconds and illustrates a timeline of a typical real-time streaming buffer refill cycle with streaming requests and auxiliary requests in between the streaming requests.
  • the buffer 110 is refilled twice, at 0 seconds and at 55 seconds.
  • Three auxiliary requests are queued, at different priorities.
  • a request with priority 6 1110 is queued first, at 40 seconds a request with priority 5 1120 is queued and at 52 seconds a request with priority 2 1130 is queued. Suppose that the combined streams have such a bit rate and the buffer 110 has such a capacity that the system needs to refill the buffer 110 every 55 seconds.
  • the request scheduling unit 320 would then generate streaming requests 205 at this 55 second interval.
  • the request with priority 6 1110 is executed immediately, although it is a low priority request. This is because the request scheduling unit 320 has determined, using the prediction unit 600, that it will be another 40 seconds before the storage means 170 has to spin up anyway and such a delay is not tolerable for the network application 220. The request is executed and the buffer 110 is filled again completely, although it was not yet empty. This is the most efficient method, because the storage means 170 is spinning anyway and the next refill time point will be further postponed to the future by the request scheduling unit 320.
  • the request with priority 5 1120 issued 40 seconds after the buffer 110 started filling will be postponed for 20 seconds and would get executed after the buffer 110 is filled again. This would be just within the predetermined timeout parameter 515 for a priority of 5 and would avoid having to spin up the storage means 170 too early.
  • the request with priority 2 1130 issued 3 seconds before the buffer 110 is to be filled is delayed for 4-5 seconds, although it is a fairly high priority request. It would not be desired that this request be delayed until the buffer 110 is full because that would take 8 seconds. This is longer than the predetermined timeout parameter 515 of that specific priority level, i.e. priority level 2.
  • If the auxiliary request 215 would be executed immediately, before the buffer 110 refill, the storage means 170 would have to spin up earlier and thus power would be wasted.
  • the auxiliary request 215 may be delayed until after the buffer 110 is at such a level that enough data is held to provide the guarantees to the streaming application for the amount of time it would take to execute the auxiliary request 215. Then it is safe to execute the auxiliary request 215 and the request scheduling unit 320 may mark the auxiliary request 215 for execution. This way, the auxiliary request 215 pre-empts the filling of the buffer 110. Although the filling takes a little bit longer, the spinning up of the drive is postponed as long as possible while still executing the auxiliary request 215 in time.
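  • Replaying the Fig. 11 timeline with the decide() sketch and the illustrative timeout values assumed earlier (BE2 = 5 s, BE5 = 20 s, BE6 = 30 s, which are not the actual Table 1 values) reproduces the behaviour described above:

```python
# Refill cycle of 55 seconds, as in Fig. 11 (timeout values are the assumed ones).
print(decide(TIMEOUT_BY_PRIORITY["BE6"], 40))  # priority 6, 40 s until the next refill -> execute_now
print(decide(TIMEOUT_BY_PRIORITY["BE5"], 15))  # priority 5, 15 s until the next refill -> postpone
print(decide(TIMEOUT_BY_PRIORITY["BE2"], 8))   # priority 2, ~8 s until the buffer would be full -> execute_now
```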
  • a process is illustrated for use in the embodiment of Fig. 6 and is similar to the process of Fig. 8 in many respects.
  • the process illustrated in Fig. 9 may also be implemented as a control program running on processor 120. Again, it is assumed that the storage means 170 is initially in a non-operating mode to save power, i.e. it is spun down.
  • data requests are received.
  • the priority of the data request is determined as described in the description of Fig. 5 relating to the priority determination unit 300.
  • the data requests may also optionally be filtered into streaming requests 205 which will force the transition of the storage means 170 to an operating mode and auxiliary requests 215 which may, or may not, force the transition of the storage means 170 to an operating mode.
  • the priority 315 of the auxiliary requests 215 may be discriminated into high priority auxiliary requests, intermediate priority requests and low priority auxiliary requests.
  • the predetermined timeout value 515 is assigned to the auxiliary request 215 based upon the priority 315. As has been described in the description relating to the timeout assignment unit 510 of Fig. 5, this may be performed using a table, like that shown in Table 1, a function or any other suitable means.
  • the time of the next transition to an operating mode of the storage means 170 is predicted. This may be based on the filling of the buffer 110 and any pending streaming requests 205. This was described in relation to the prediction unit 600 of Fig. 6 and also in relation to the example of Fig. 11 using the parameters of Table 1.
  • the requests for which the predetermined timeout parameter 515 is smaller than the time until the next predicted spin up time may be marked as requests to be executed. This has been described in relation to the request scheduling unit 320 of Fig. 5.
  • the requests pending may be scanned to identify which requests are marked as requests to be executed. If there are no requests marked as requests to be executed then the process progresses to step 850 wherein the requests are queued. The requests may then be queued in the request queue 310. If, on the other hand, there are requests marked as requests to be executed then the process progresses to step 860.
  • the storage means 170 immediately transitions to an operating mode and the requests to be executed are, therefore, treated in a similar manner to the streaming requests 205.
  • the auxiliary requests 215 that are marked as requests to be executed and any pending requests, be they streaming requests 205 or auxiliary requests 215, may also be executed.
  • the requests marked as requests to be executed may be executed in order of priority, if there are deadlines that may be missed. Otherwise, they may be handled in order of efficiency. For example, requests may be ordered to optimize the seeking required to satisfy the requests marked as requests to be executed.
  • the storage means 170 is transitioned to a non-operating mode to save power.
  • The embodiment of Fig. 7 differs from that of Fig. 5 in that the scheduler 240 further comprises a timeout expiration detection unit 700. This is useful for durations when no streaming application is running, i.e. the streaming requests 205 are not being generated.
  • the timeout expiration detection unit 700 may examine the request queue 310 and receive the predetermined timeout parameter 515 for the auxiliary request 215. The timeout expiration detection unit 700 may then initiate an internal timer to trigger an event when the time period of the predetermined timeout parameter 515 has elapsed.
  • the timeout expiration detection unit 700 may then check to see if the auxiliary request 215 has already been marked for execution. If this is not the case, the timeout expiration detection unit 700 may then send an indication 710 to the request scheduling unit 320 that the auxiliary request 215 should be marked for execution. The timeout expiration detection unit 700 may also directly mark the auxiliary request 215 for execution, though this is not shown in Fig. 7.
  • the request scheduling unit 320 then proceeds in a manner described earlier in the description relating to Fig. 5.
  • the auxiliary request 215 can either be executed immediately always, or the request scheduling unit 320 can wait until the indication 710 arrives so as to gather more requests and execute them all at once and thus save power.
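  • One possible sketch of the timeout expiration detection unit 700 keeps a per-request deadline in a heap and reports the requests whose timeout has elapsed; the class and method names are hypothetical.

```python
import heapq
import time

class TimeoutExpirationDetector:
    """Hypothetical stand-in for the timeout expiration detection unit 700."""

    def __init__(self):
        self._deadlines = []                       # min-heap of (deadline, tie-breaker, request)

    def watch(self, request, timeout_s):
        deadline = time.monotonic() + timeout_s
        heapq.heappush(self._deadlines, (deadline, id(request), request))

    def expired(self):
        # Return the requests whose timeout has elapsed; the request scheduling
        # unit would then mark them as requests to be executed.
        now, due = time.monotonic(), []
        while self._deadlines and self._deadlines[0][0] <= now:
            due.append(heapq.heappop(self._deadlines)[2])
        return due
```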
  • the lowest priority requests may be delayed for an infinite amount of time and thus will only be executed whenever a higher priority request is issued. Alternatively, this lowest priority request may be assigned a very large, but not infinite, time-out value to avoid losing requests.
  • a default priority could be given to requests that do not state an explicit priority. Most requests cannot be delayed indefinitely, so the default priority may rarely be the lowest priority. Depending on the usage of the system, the default priority can be dynamically adapted.
  • In Fig. 10 a more complicated process is illustrated for use in the embodiment of Fig. 6, Fig. 7 or a combination of Fig. 6 and Fig. 7, taking into account multiple types of data requests.
  • the process illustrated in Fig. 10 may also be implemented as a control program running on processor 120. Again, it is assumed that the storage means 170 is initially in a non-operating mode to save power, i.e. it is spun down.
  • data requests are received.
  • the requests may be streaming requests 205, also known as real time, or RT, requests, and/or auxiliary requests 215.
  • the streaming requests 205 are filtered from the auxiliary requests 215.
  • the streaming requests 205 cause the storage means 170 to be transitioned to an operating mode in step 860.
  • the details of step 860 have been described earlier in detail.
  • the streaming requests 205 are handled in a conventional manner by the storage means 170.
  • all other pending requests are also handled whilst the storage means 170 is still in the operating mode. This will ensure that the buffer 110 is filled and the saving of power is optimized.
  • the process of Fig. 10 continues at step 810.
  • the priority of the data request is determined as described in the description of Fig. 5 relating to the priority determination unit 300.
  • Step 810 may discriminate the priority 315 of the auxiliary requests 215 into high priority auxiliary requests, intermediate priority requests and low priority auxiliary requests.
  • the predetermined timeout value 515 is assigned to the auxiliary request 215 based upon the priority 315. As has been described in the description relating to the timeout assignment unit 510 of Fig. 5 this may be performed using a table, like that as shown in Table 1, a function or any other suitable means.
  • a check may be performed, at step 1020, to see if there are streams running, i.e. whether the streaming requests 205 are being generated.
  • In step 900 the time of the next transition to an operating mode of the storage means 170 is predicted. This may be based on the filling of the buffer 110 and any pending streaming requests 205. This was described in relation to the prediction unit 600 of Fig. 6 and also in relation to the example of Fig. 11 using the parameters of Table 1.
  • In step 1030 of Fig. 10 a check is performed on the predetermined timeout parameter 515 and the predicted spin up time. If the predetermined timeout parameter 515 is smaller than the time until the next predicted spin up time then the process moves to step 910, where the associated request, or requests, is/are marked as requests to be executed. This has been described in relation to the request scheduling unit 320 of Fig. 5.
  • This also causes an execution of step 860 to set the storage means 170 into an operating mode.
  • the requests are then handled in step 870 and the storage means 170 is set to a non-operating mode in step 880. If the predetermined timeout parameter 515 is not smaller than the time until the next predicted spin up time then the process moves to step 850 wherein the requests are queued.
  • In step 850 the auxiliary requests 215 are queued.
  • In step 1040 the process waits until a time period corresponding to the predetermined timeout parameter 515 has expired. This has been described in the description relating to the timeout expiration detection unit 700 of Fig. 7.
  • the auxiliary request 215 is marked for execution at step 910. This process step has been described previously.
  • the marking of the auxiliary request 215 as a request to be executed causes the storage means 170 to transition to an operating mode in step 860.
  • the auxiliary request 215 and any other pending requests are executed to fill the buffer 110 in step 870.
  • the storage means 170 is then transitioned to a non-operating mode in step 880.
  • the next spin up time may then be postponed to the future and the power may be used in an efficient manner.
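  • Tying the pieces together, the Fig. 10 handling of a single auxiliary request might be sketched as below, reusing the helpers sketched earlier; the step numbers in the comments refer to Fig. 10 and the interfaces are assumptions.

```python
def handle_auxiliary(request, queue, storage, streams_running,
                     time_to_next_refill_s, detector):
    request.timeout = assign_timeout(request.priority)            # steps 810 and 820
    if streams_running:                                           # step 1020
        if request.timeout < time_to_next_refill_s:               # steps 900 and 1030
            request.to_execute = True                             # step 910
        else:
            queue.append(request)                                 # step 850
            detector.watch(request, request.timeout)              # step 1040
    else:
        queue.append(request)                                     # no streams: wait for the
        detector.watch(request, request.timeout)                  # timeout to expire instead

    if getattr(request, "to_execute", False):
        storage.spin_up()                                         # step 860: operating mode
        for r in [request] + list(queue):                         # step 870: execute the marked
            storage.execute(r)                                    # request and pending requests
        queue.clear()
        storage.spin_down()                                       # step 880: save power again
```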
  • the invention discloses methods and devices for power scheduling of data requests.
  • the data requests may comprise auxiliary requests 215 and streaming requests 205. Power is saved by setting the storage means 170 to a non-operating mode when no data requests are pending and to an operating mode when at least one data request is to be executed.
  • the auxiliary requests 215 each have a priority 315 that is used to assign a predetermined timeout value 515.
  • the predetermined timeout value indicates a maximum time for which each auxiliary request may be postponed. If the predetermined timeout value is exceeded the auxiliary request will be marked as a request to be executed and the storage means 170 will transition to the operating mode.
  • the predicted spin up time for the next streaming request may also be compared to the predetermined timeout value and, if the wait would exceed the predetermined timeout value, the auxiliary request may be executed immediately.
  • any of the embodiments described comprise implicit features, such as an internal current supply, for example, a battery or an accumulator.
  • any reference signs placed in parentheses shall not be construed as limiting the claims.
  • the words “comprising” and “comprises”, and the like, do not exclude the presence of elements or steps other than those listed in any claim or the specification as a whole.
  • the singular reference of an element does not exclude the plural reference of such elements and vice versa.
  • in a device claim enumerating several means, several of these means may be embodied by one and the same item of hardware.
  • the mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A device and a method for power scheduling of data requests (205, 215) are disclosed. The data requests comprise auxiliary requests (215) and streaming requests (205). Power is saved by setting a storage means (170) to a non-operating mode when no data requests are pending and to an operating mode when at least one data request is to be executed. The auxiliary requests (215) each have a priority (315) that is used to assign a predetermined timeout value (515). The predetermined timeout value indicates a maximum time for which each auxiliary request may be postponed. If the predetermined timeout value is exceeded the auxiliary request will be marked as a request to be executed and the storage means (170) will transition to the operating mode. The predicted spin up time for the next streaming request may also be compared to the predetermined timeout value and, if the wait would exceed the predetermined timeout value, the auxiliary request may be executed immediately.

Description

A device and a method for power scheduling of data requests
FIELD OF THE INVENTION
The invention relates to a device for power scheduling of data requests. The invention further relates to a method for power scheduling of data requests.
The invention further relates to a program element.
The invention further relates to a computer-readable medium.
BACKGROUND OF THE INVENTION
All electronic devices consume power. The power consumed translates directly into cost for an end user. For example, the power consumed by a home video cassette recorder results in an energy bill. For mobile devices the power consumed also translates into a certain time for which a device will operate, for example, when a portable audio player runs on batteries. In general saving power is always of benefit to an end user. This is true for situations both in the home and on the move.
Electronic devices operating in the Consumer Electronics domain generally have further requirements of providing a guaranteed quality of service to an end user. For example, an audio playback device should always play the audio desired by an end user and a video playback device should always play the video desired by an end user and this should be performed without noticeable glitches in the audio or video playback. Therefore, such devices often work in a streaming manner, i.e. with real time guarantees. For dedicated devices it has been possible to tailor Consumer Electronics devices to provide a guaranteed quality of service in combination with optimized power consumption by using a buffer. The buffer is generally the remaining memory space that is not used by any processes running on the device. In this way all available remaining space is used to optimize the power consumption.
For example, WO 2004/066293 provides energy efficient disk scheduling for mobile applications in the presence of both streaming requests and auxiliary requests. The auxiliary requests are requests which concern auxiliary information, such as program data and executable code, libraries, database requests, user interface driven requests, network driven requests etc. Such auxiliary requests may also be the normal requests generated on any general purpose computer known to the skilled person and may be defined as those requests that do not possess a real time deadline for execution. A commonly used term for such requests is best effort requests, in that the device or system executing them generally makes the best effort possible to execute such requests in the shortest possible time.
In WO 2004/066293 streaming requests and auxiliary requests are scheduled in a manner that optimizes power consumption by transitioning the operation mode of a disk supplying information between a non-operating, i.e. low power, mode and an operating, i.e. high-power, mode. Requests are scheduled such that the streaming requests are guaranteed to be serviced in time whilst taking into account the priority of the auxiliary requests. Use is made of a buffer to hold data for processing whilst the disk may be set in standby mode or powered-off completely. An energy saving scheduling means ensures that the quality of service remains guaranteed by filling the buffer when necessary. Two priorities are envisioned for auxiliary requests, namely, high priority auxiliary requests which force the disk to enter the operating mode immediately, wherein all pending streaming requests are also executed to completely re-fill the buffer, or low priority auxiliary requests which are postponed until the next occasion upon which the disk transitions to an operating mode. This is generally caused by the execution of a subsequent streaming request.
Whilst the prior art achieves a saving in power in most situations, it has been found that, on occasions, power is still unnecessarily wasted by forcing the choice between immediately executing auxiliary requests and indefinitely postponing auxiliary requests until the next transition to an operating mode. It has been found that some applications using auxiliary requests assume that requests are handled within a reasonable period of time, i.e. they do not demand an immediate response, but they do expect a response within a relatively long time period (in computer terms), for example, within 10's of seconds. Examples of such applications are network driven applications wherein the application waits for a response from devices on the network. Such time periods are commonly implemented in, for example, the Universal Plug and Play (UPnP) standard. Another application having a relatively long time period (in computer terms) is a file system driver requiring consistency between the on-disk state and the in-memory state of the file system. Due to the large amount of buffering in modern devices, postponing auxiliary requests can lead to delays ranging from 10's of seconds, for example, when high definition video is also being serviced, to 10's of minutes, for example, when only compressed audio is being serviced. The length of the delay therefore depends completely upon the streaming requests being serviced. The only option then open is to assign a high priority to such requests, whilst they are, in fact, not high priority requests. When such applications also produce such auxiliary requests in a regular manner, i.e. periodically, this may lead to the disk never being powered down and therefore no power conservation. Such periodic operation is again common in networking applications.
Such issues arise in newer devices, in which it has become possible to process multiple audio and video streams. For example, a mobile audio/video device with storage and networking facilities may stream information from the built-in storage device to a remote display whilst simultaneously allowing a user to browse the Internet or his/her home network using a suitable browser application. Such newer devices would also benefit from the optimization of power consumption in the presence of both streaming requests and a range of different auxiliary requests whilst providing a guaranteed quality of service; however, the prior art gives no indication of how this may optimally be achieved. The inventors, recognizing this problem, devised the present invention.
BRIEF SUMMARY OF THE INVENTION
The present invention seeks to address one or more shortcomings of the prior art.
Accordingly, there is provided, in a first aspect of the present invention, a device for power scheduling of data requests, the data requests comprising auxiliary requests regarding auxiliary information, the device comprising a storage means adapted to store and/or retrieve information defined by the data requests, a priority determination unit adapted to determine a priority of each one of the auxiliary requests, a timeout assignment unit adapted to assign a predetermined timeout parameter to each one of the auxiliary requests dependent upon the priority determined, a request scheduling unit adapted to determine the data requests, if any, that are to be executed based upon the predetermined timeout parameter and a storage means mode controller adapted to set the storage means to a non-operating mode when no data requests are pending and set the storage means to an operating mode when at least one of the data requests is to be executed.
According to a second aspect of the invention a method is provided for power scheduling of data requests, the data requests comprising auxiliary requests regarding auxiliary information, the method comprising the steps of receiving the data requests defining information to be stored and/or retrieved, determining a priority of each one of the auxiliary requests, assigning a predetermined timeout parameter to each one of the auxiliary requests dependent upon the priority determined, determining the data requests, if any, that are to be executed based upon the predetermined timeout parameter and setting the storage means to a non-operating mode when no data requests are pending and setting the storage means to an operating mode when at least one of the data requests is to be executed.
According to a third aspect of the invention a program element is provided, the program element being directly loadable into the memory of a programmable device, comprising software code portions for performing, when said program element is run on the device, the method steps of receiving data requests defining information to be stored and/or retrieved, the data requests comprising auxiliary requests regarding auxiliary information, determining a priority of each one of the auxiliary requests, assigning a predetermined timeout parameter to each one of the auxiliary requests dependent upon the priority determined, determining the data requests, if any, that are to be executed based upon the predetermined timeout parameter and setting the storage means to a non-operating mode when no data requests are pending and setting the storage means to an operating mode when at least one of the data requests is to be executed.
According to a fourth aspect of the invention a computer-readable medium is provided, the computer-readable medium directly loadable into the memory of a programmable device, comprising software code portions for performing, when said code portions are run on the device, the method steps of receiving data requests defining information to be stored and/or retrieved, the data requests comprising auxiliary requests regarding auxiliary information, determining a priority of each one of the auxiliary requests, assigning a predetermined timeout parameter to each one of the auxiliary requests dependent upon the priority determined, determining the data requests, if any, that are to be executed based upon the predetermined timeout parameter and setting the storage means to a non-operating mode when no data requests are pending and setting the storage means to an operating mode when at least one of the data requests is to be executed.
In one embodiment the data requests may further comprise streaming requests regarding real time information and a predicted spin up time to satisfy the streaming requests may be determined. Real time information is information having a real time priority and is not limited to only information arriving in real time. For example, stored video is to be understood as real time information in the context of this specification since the display of the stored video has a real time aspect. The auxiliary requests for which the predetermined timeout parameter is smaller than the time until the next predicted spin up time may be marked as data requests to be executed. This allows an immediate decision to be taken whether a postponement of the auxiliary requests is sensible, or not. In a further embodiment the expiration of the predetermined timeout parameter may be detected for at least one of the auxiliary requests. Upon said detection the auxiliary requests concerned may be marked as data requests to be executed. This ensures that auxiliary requests are executed within the time period indicated by the predetermined timeout parameter, even in the case that no further streaming requests or high priority requests are to be executed.
In another embodiment the predetermined timeout parameter may be assigned an execute immediately value for auxiliary requests determined to be of high priority. This may ensure the quickest possible response for auxiliary requests that, for example, a user is waiting upon.
In yet another embodiment the auxiliary requests that are to be immediately executed and any pending streaming requests may be marked as data requests to be executed. This refills the buffer completely with any pending streaming requests allowing the next period of non-operation to be lengthened.
In an embodiment an infinite timeout value may be assigned to the predetermined timeout parameter should the priority be determined as low. Such an assignment allows low priority requests to be postponed until the next occasion when an operating mode is entered. This saves power for the complete period of time until the transition to the operating mode.
In another embodiment the auxiliary requests that have been assigned an infinite timeout value may be marked as data requests to be executed when the mode of the storage means is set to an operating mode. Such a measure allows low priority requests to be executed when the transition to the operating mode occurs.
In another embodiment an intermediate timeout value, between an execute immediately value and an infinite timeout value, may be assigned to the predetermined timeout parameter should the priority be determined as intermediate between a low and a high priority. This measure allows a finer granularity to be achieved in the scheduling of requests.
In a further embodiment a buffer may be used for temporarily storing the information and the streaming requests may be marked as requests to be executed based upon a filling level of the buffer. These measures allow guarantees to be provided for the quality of service of the streaming requests.
In an embodiment additional data may be read immediately following the information and the additional data may be stored for providing the additional data in the event of receiving a next auxiliary request requesting the additional data. The pre-reading of additional data is useful in systems using synchronous data requests to improve the responsiveness of the system.
In a further embodiment the request scheduling unit may be further adapted to mark all pending data requests as data requests to be executed when the storage means is set to an operating mode. This allows the buffer to be filled completely with any future pending streaming requests or any auxiliary requests that were pending. Therefore, the next spin up time may be postponed further to the future.
In a further embodiment a device according to the invention may be realized as at least one of the group consisting of a Set-Top-Box device, a digital video recording device, a network-enabled device, a conditional access system, a portable audio player, a portable video player, a mobile phone, a DVD player, a CD player, a hard disk based media player, an Internet radio device, a computer, a television, a public entertainment device and an MP3 player. However, these applications are only exemplary.
The data processing required according to the invention can be realized by a computer program, that is to say by software, or by using one or more special electronic optimization circuits, that is to say in hardware, or in hybrid form, that is to say by means of software components and hardware components.
The aspects defined above and further aspects of the invention are apparent from the examples of embodiment to be described hereinafter and are explained with reference to these examples of embodiment.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention will be described in more detail hereinafter with reference to examples of embodiment but to which the invention is not limited.
Fig. 1 illustrates a device for power scheduling of data requests.
Fig. 2 illustrates the structure and the interconnecting data paths of the device of Fig. 1.
Fig. 3 illustrates the structure of a scheduler and the internal scheduler data paths according to the prior art.
Fig. 4 illustrates a flow chart used for power scheduling according to the prior art.
Fig. 5 illustrates the structure of a scheduler and the internal scheduler data paths according to the present invention.
Fig. 6 illustrates the structure of a second scheduler according to the present invention.
Fig. 7 illustrates the structure of a third scheduler according to the present invention.
Fig. 8 illustrates a flow chart used for power scheduling according to the present invention.
Fig. 9 illustrates a second flow chart used for power scheduling according to the present invention.
Fig. 10 illustrates a third flow chart used for power scheduling according to the present invention.
Fig. 11 illustrates a timeline of a typical real time streaming buffer refill cycle with streaming requests and auxiliary requests in between the streaming requests.
The Figures are schematically drawn and not true to scale, and the identical reference numerals in different Figures refer to corresponding elements. It will be clear for those skilled in the art, that alternative but equivalent embodiments of the invention are possible without deviating from the true inventive concept, and that the scope of the invention will be limited by the claims only.
DETAILED DESCRIPTION OF THE INVENTION
Fig. 1 shows a device 100 according to the invention. The device 100 comprises a storage means 170, which may be a hard disk drive, a floppy disc drive, a flash memory device or equivalent. The storage means 170 may be used to store audio and video that a user 192 would like to preserve or render. The device 100 may, for example, be a portable audio/video jukebox device running on a battery (not shown). The device 100 may also comprise a codec 150 for encoding and/or decoding audio/video data streams for display on a local display 160. An audio rendering device (not shown), such as a speaker, may also be present in the device 100. The device 100 may also comprise a means for communication 130, such as an Ethernet interface, in wired or wireless form, a WiFi interface, a Bluetooth interface or a mobile phone network interface. A network interface controller may also be understood as a means for communication 130. The device 100 may then also receive one or more data streams via the means for communication 130 for decoding using the codec 150 and further display on the local display 160 or for storing in the storage means 170. The means for communication 130 may also be used to transmit data streams to a display on a remote server 165 via network 180. The network 180 may be a local network or a worldwide network such as the Internet. The user 192 may interact with the device 100 using a user interface 190. Typically, the user 192 interacts with the user interface 190 using a remote control 191, but other means of interaction are also possible. For example, the user 192 may interact with the device 100 using a touch screen, a scroll wheel, buttons, a mouse or other pointer device, a keyboard etc.
The means for communication 130 and the storage means 170 generally consume significant amounts of power during operation. To reduce the power consumption a buffer 110 may be used to temporarily store data streams such that component units, such as the storage means 170 and the means for communication 130, may be powered down. This ensures that the data streams may still be processed and that the quality of service expected by the user 192 is preserved. For example, in Fig. 1, the buffer 110 may be distributed, or split, amongst the storage means 170 and the means for communication 130. The buffer 110 may also be split according to each stream when multiple streams are to be serviced. In such a case each stream may have a streaming buffer. The splitting of the buffer 110 may be achieved by the use of a control program running on a processor 120 and a system bus 140. The system bus 140 may interconnect all of the component units comprised within the device 100, allowing the processor 120 to control each component unit.
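Purely by way of illustration, and not as part of the claimed embodiment, a split of the buffer 110 amongst several streams could be sketched as follows; the choice to split in proportion to each stream's bit rate, and all names used, are assumptions made only for this sketch.

    # Illustrative sketch: splitting buffer 110 amongst several streams.
    # Splitting in proportion to each stream's bit rate is an assumption,
    # not a requirement of the specification.
    def split_buffer(total_bytes, stream_bitrates_bps):
        """Return a per-stream buffer allocation summing to at most total_bytes."""
        total_rate = sum(stream_bitrates_bps.values())
        if total_rate == 0:
            return {name: 0 for name in stream_bitrates_bps}
        return {name: total_bytes * rate // total_rate
                for name, rate in stream_bitrates_bps.items()}

    # Example: a 64 MiB buffer shared by a video stream and an audio stream.
    print(split_buffer(64 * 1024 * 1024, {"video": 4_000_000, "audio": 128_000}))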
In Fig. 2 the structure and the interconnecting data paths of the device of Fig. 1 are shown, as may be embodied by a suitable control program running on the processor 120 of Fig. 1. A real time application 200 is shown generating streaming requests 205. The streaming requests 205 are requests regarding real-time information. Such requests have deadlines that must be met to ensure that the quality of service is delivered that the user 192 expects. In addition to this a non real time application 210 may also be running on the processor 120. The non real time application 210 may generate auxiliary requests 215. The auxiliary requests 215 have no indication of real time constraints. Typically these are produced by applications that were not designed to be aware of real time constraints, though this is not a requirement. A typical example of such an application is a network application 220. Network application 220 may also produce the auxiliary requests 215. The network application 220 may, in fact, produce requests with real time constraints, even though no indication of the real time constraints is available. Network application 220 may, for example, be an application using the Universal Plug and Play (UPnP) standard. In typical home networking situations the delays accepted might be of the order of 10's of seconds. Since such delays are orders of magnitude longer than those typically encountered in normal computer systems, no measures are taken to guarantee the execution times of requests. Another application having constraints on requests having a relatively long time period (in computer terms) is a file system driver requiring consistency between the on-disk state and the in-memory state of the file system. Again, delays of the order of 10's of seconds were never expected when the application was designed.
In Fig. 2 all data requests may be addressed to a file system 230. The file system 230 may be any file system known to the skilled person and offer the functionality to map files, using names and directories, to logical locations on the storage means 170 where the information defined by the data requests is located. Preferably, the file system 230 is also aware of real time constraints and accesses the storage means 170 in blocks large enough to ensure a guaranteed quality of service even when files are fragmented. The file system 230 passes the requests to a scheduler 240 where the data requests are scheduled in a manner to guarantee the quality of service of the streaming requests 205 to the user 192. The scheduler 240 sends scheduled requests 250 to the storage means 170. The scheduler 240 is also capable of setting the operating mode of the storage means using a mode control signal 260. For example, the scheduler 240 can set the storage means 170 to an operating mode when data requests are to be executed or to a non-operating mode when no requests are required to be executed using the mode control signal 260. In the non-operating mode power is saved. When the storage means 170 is a hard disk drive the operating mode may be a read/write mode, a performance idle mode, an active idle mode, a low power idle mode etc. and the non-operating mode may be a standby mode, a sleep mode, a powered down mode etc. In a powered down mode the hard disk drive may be completely isolated from the device 100 to ensure maximum power saving by isolating any interconnecting interface. This can be achieved using field effect transistors, FETs, for example.
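As a minimal, non-authoritative sketch of the mode control just described, the setting of the operating mode via the mode control signal 260 could be modelled in software roughly as follows; the class name, the method names and the particular mode values are assumptions chosen only for illustration.

    # Illustrative sketch of mode control via the mode control signal 260;
    # names and mode values are assumptions, not taken from the specification.
    from enum import Enum

    class StorageMode(Enum):
        READ_WRITE = "operating: read/write"
        ACTIVE_IDLE = "operating: active idle"
        STANDBY = "non-operating: standby"
        POWERED_DOWN = "non-operating: powered down"

    class ModeController:
        def __init__(self):
            self.mode = StorageMode.STANDBY   # assume the drive starts spun down

        def set_operating(self):
            # Spin the drive up so that pending data requests can be executed.
            self.mode = StorageMode.READ_WRITE

        def set_non_operating(self):
            # Spin the drive down (or power it off) to save power.
            self.mode = StorageMode.STANDBY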
In handling auxiliary requests 215 having no indication of real time constraints, a problem can occur because no mechanism was provided to ensure that such requests would be executed in time. This is because in normal operation on a computer system requests are generally executed very quickly. However, to save power the storage means 170 may be shut down and the auxiliary requests 215 may be queued for very long periods of time. This is due to the large amount of buffering in modern devices. For example, buffer 110 may be 32, 64, 128 Megabytes or even larger. Buffer 110 may be Dynamic Random Access Memory, DRAM, Synchronous Dynamic Random Access Memory, SDRAM, or any suitable memory technology known to the skilled person. Preferably, the buffer 110 uses a low power version of the memory technology. The postponing of the auxiliary requests 215 can lead to delays ranging from 10's of seconds, for example, when high definition video is also being serviced, to 10's of minutes, for example, when only compressed audio is being serviced, even for relatively small buffer sizes. This depends completely upon the streaming requests being serviced, which is governed by the user 192 and is not predictable.
In general the only option open to a system designer is to assign a high priority to requests that do possess a real time character, whilst they are, in fact, not high priority requests. When such applications also produce such requests in a regular manner, i.e. periodically, this may lead to the disk never being powered down and therefore no power conservation. Such periodic operation is again common in networking applications, such as the network application 220.
Fig. 3 illustrates scheduler 240 constructed according to the prior art. The streaming requests 205 and the auxiliary requests 215 enter a priority determination unit 300. The priority determination unit 300 may determine a priority 315 of each auxiliary request 215 and a streaming request priority 305 of each streaming request 205. The priority may have been assigned in an application, such as the real time application 200, the non real time application 210 or the network application 220. The priority may also be implicitly assigned merely by the interface via which the request is received. For example, multiple Application Programming Interfaces, or API's, may be defined for the various request priorities. For example, the scheduler 240 may have a real time API, a high priority auxiliary request API, a low priority auxiliary request API, etc. How the priority is assigned to data requests is not relevant for the prior art or the present invention. Therefore, the streaming request priority 305 is entirely optional since the interface via which the streaming requests 205 are received may be indication enough that the streaming requests 205 are real time requests. The streaming requests 205 and the auxiliary requests 215 are placed in a request queue 310. In most operating systems used in computer systems, each read or write request gets added to a queue, such as the request queue 310.
A request scheduling unit 320 analyzes the request queue 310 and schedules the streaming requests 205 to guarantee a predetermined quality of service for the user 192. The skilled person would also recognize the terms elevator or I/O scheduler as being equivalent to the request scheduling unit 320. Many forms of request scheduling are well known to the skilled person. For example, suitable methods could be a Round-Robin method, an earliest deadline first method, a single sweep method, a dual sweep method, etc. The request scheduling unit 320 may mark the streaming requests 205 that are required to meet any deadlines. The request scheduling unit 320 then executes the requests by sending them to the storage means 170, such as a hard disk drive, possibly re-ordered for efficiency. The request queue 310 may be implemented as multiple request queues, one for each priority. Requests on a lower priority request queue may only get executed after all higher priority request queues are empty, though this is not essential. The execution of requests requires that the storage means 170 is set to an operating mode. This may be achieved using a storage means mode controller 330. The request scheduling unit 320 may communicate to the storage means mode controller 330 that requests have been marked for execution using a mode indication signal 340. The storage means mode controller 330 may then set the operating mode of the storage means 170 using the mode control signal 260. The prior art scheduler of Fig. 3 may, therefore, discriminate the auxiliary requests 215 using the priority 315.
In Fig. 4 a flowchart is illustrated, for the device of Fig. 3, indicating the operation of the prior art, WO 2004/066293 A1. Initially, it is assumed that the storage means 170 is in a non-operating mode to save power, i.e. it is spun down. At step 400 data requests are received. At step 410 the data requests are filtered into streaming requests 205, which will force the transition of the storage means 170 to an operating mode, and auxiliary requests 215, which may, or may not, cause a transition of the operating mode. In Fig. 4 the terminology RT means real time. The operating mode transition occurs in step 420. The requests are handled at step 425 along with any other pending requests at step 430. To save power the storage means 170 is transitioned to the non-operating mode, i.e. it is then spun down or powered down, in step 435. The end of the process is reached at step 440. The process then waits for new requests or some other suitable trigger to change the operating mode.
When auxiliary requests 215 are filtered at step 410, the prior art, WO 2004/066293 A1, discriminates the priority 315 of the auxiliary requests 215 into high priority auxiliary requests and low priority auxiliary requests at step 450. The high priority auxiliary requests are intended for auxiliary requests that require an immediate response. These may be, for example, auxiliary requests that the user 192 has initiated via the user interface 190. For these high priority auxiliary requests the process transfers to step 420. Such high priority auxiliary requests cause the storage means 170 to immediately transition to an operating mode and are, therefore, treated in a similar manner to the streaming requests 205. When in the operating mode any pending requests, be they streaming requests 205 or auxiliary requests 215, may also be executed, at step 430, after which the storage means 170 may be set to a non-operating mode in step 435. According to the prior art, the auxiliary requests 215 which are determined to be low priority auxiliary requests at step 450 are entered into the request queue 310 at step 460. They are not marked for execution and, therefore, the storage means 170 remains in a non-operating mode to save power. Such low priority auxiliary requests remain in a wait state, at step 470, until the storage means 170 is transitioned to an operating mode. As described earlier, this may be after a considerable period of time. Thereafter the process ends at step 480. The process may then begin again.
Auxiliary requests 215 which have inherent real time constraints that are not explicitly catered for, such as the auxiliary requests 215 resulting from the network application 220, may therefore encounter problems. This is because such requests must be treated as high priority auxiliary requests, the disadvantages of which have already been described.
To address the shortcomings of the prior art the scheduler 240 may be constructed as illustrated in Fig. 5. The streaming requests 205 and the auxiliary requests 215 enter the priority determination unit 300. The priority determination unit 300 may determine the priority 315 of each auxiliary request 215 and the streaming request priority 305 of each streaming request 205. This may be performed in a similar manner to that shown in Fig. 3 and using the relevant steps of the process of Fig. 4. Again, the streaming request priority 305 is entirely optional since the interface via which the streaming requests 205 are received may be indication enough that the streaming requests 205 are real time requests. The streaming requests 205 and the auxiliary requests 215 are communicated to a timeout assignment unit 510. The timeout assignment unit 510 assigns a predetermined timeout parameter 515 to each auxiliary request 215. Optionally, the timeout assignment unit 510 may also assign a streaming request predetermined timeout value 505 according to the streaming request priority 305. The predetermined timeout parameter 515 specifies how long a request may be delayed before it is considered for execution. This does not need to be a hard guarantee, as for the streaming requests 205, because higher priority requests may always be executed first. This then does not break the priority system. However, under normal circumstances, where a system is not heavily overloaded, an application that issues a request with a certain priority can rely on the fact that it will be delayed at most the number of seconds defined by the predetermined timeout parameter 515 of that specific priority level. For example, if each priority has an individual request queue, the highest priority request queue may have the predetermined timeout parameter 515 set to 0 seconds and thus such requests will be executed immediately. The lowest priority request queue may have the predetermined timeout parameter 515 set to an infinite number of seconds and thus such requests will always be delayed until the next streaming buffer refill. The assignment may be based on a predefined table linking priorities to timeout values. The table may be a static table created at design time or may be dynamically updated using a suitable API. An example of such a table is shown in Table 1.
[Table 1, reproduced as an image in the published application, maps the request queue priorities RT and BE1 to BE7 to corresponding timeout values.]
Table 1 Example assignment of timeout values of different request queue priorities.
In Table 1 RT is real time, BEx is best effort of priority x, where best effort is a commonly used term for auxiliary requests. BE1 may be defined as an execute immediately value for the predetermined timeout parameter 515 and may be used for auxiliary requests 215 with a high priority. BE7 may be defined as an infinite timeout value for the predetermined timeout parameter 515 and may be used for auxiliary requests 215 with a low priority. The values BE2 through BE6 may be defined as intermediate timeout values for the predetermined timeout parameter 515 and may be used for auxiliary requests 215 with a priority between a low priority and a high priority.
Rather than a table, the timeout values may be a predefined function of the request priority. Again, such a function may be static or dynamic in nature. The timeout assignment unit 510 communicates the predetermined timeout parameter 515, and optionally the streaming request predetermined timeout value 505, to the request queue 310. The streaming requests 205 and the auxiliary requests 215 may be placed in the request queue 310 by the timeout assignment unit 510 as shown in Fig. 5, though this is not essential. For example, the streaming requests 205 and the auxiliary requests 215 may be placed in the request queue 310 by the priority determination unit 300, as shown in Fig. 3. The request scheduling unit 320 analyzes the request queue 310 and schedules the streaming requests 205 to guarantee a predetermined quality of service for the user 192 in a similar manner to that shown in Fig. 3 and may use the related process steps of Fig. 4 for this purpose.
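As a minimal sketch of the table-based and function-based assignment just described, the behaviour of the timeout assignment unit 510 could be expressed roughly as follows; the numeric values are assumptions chosen only to be consistent with the example of Fig. 11 and are not values taken from Table 1.

    # Illustrative sketch of the timeout assignment unit 510. Only BE1 = execute
    # immediately and BE7 = infinite follow from the description of Table 1; the
    # intermediate values and the default are assumptions.
    import math

    TIMEOUT_TABLE_S = {   # best-effort priority level -> timeout in seconds
        1: 0.0,           # BE1: execute immediately
        2: 5.0,
        3: 10.0,
        4: 15.0,
        5: 20.0,
        6: 30.0,
        7: math.inf,      # BE7: postpone until the next transition to an operating mode
    }

    def assign_timeout(priority):
        """Table-based assignment of the predetermined timeout parameter 515."""
        return TIMEOUT_TABLE_S.get(priority, 15.0)   # hypothetical default priority

    def assign_timeout_fn(priority, step_s=5.0):
        """Alternative: a predefined function of the request priority."""
        if priority <= 1:
            return 0.0
        if priority >= 7:
            return math.inf
        return step_s * (priority - 1)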
In Fig. 8 a process is illustrated for use in the embodiment of Fig. 5. The process may be implemented as a control program running on processor 120. Initially, it is assumed that the storage means 170 is in a non-operating mode to save power, i.e. it is spun down. At step 800 data requests are received. At step 810 the priority of the data request is determined as described in the description of Fig. 5 relating to the priority determination unit 300. At step 810 the data requests may optionally be filtered into streaming requests 205 which will force the transition of the storage means 170 to an operating mode and auxiliary requests 215 which may, or may not, force the transition of the storage means 170 to an operating mode. For example, for the auxiliary requests 215 step 810 may discriminate the priority 315 of the auxiliary requests 215 into high priority auxiliary requests, intermediate priority requests and low priority auxiliary requests. At step 820 the predetermined timeout value 515 is assigned to the auxiliary request 215 based upon the priority 315. As has been described in the description relating to the timeout assignment unit 510 of Fig. 5 this may be performed using a table, a function or any other suitable means.
At step 830 the requests are determined that are to be executed. The requests to be executed may be marked as requests to be executed. This has been described in relation to the request scheduling unit 320 of Fig. 5 and in the description relating to the example using Table 1. Optionally, the requests may remain in the request queue 310 even when they are marked as requests to be executed. In such a case each request should have a flag indicating whether the request is marked to be executed or not. At step 840 the requests pending may be scanned to identify which requests are marked as requests to be executed. If there are no requests marked as requests to be executed then the process progresses to step 850 wherein the requests are queued. The requests may then be queued in the request queue 310. If, optionally, the requests were already in the request queue 310 they will remain in the request queue 310, but they will not be marked, or treated, as requests to be executed. If, on the other hand, there are requests marked as requests to be executed then the process progresses to step 860.
At step 860 the storage means 170 immediately transitions to an operating mode and the requests to be executed are, therefore, treated in a similar manner to the streaming requests 205. When in the operating mode the auxiliary requests 215 that are marked as requests to be executed and any pending requests, be they streaming requests 205 or auxiliary requests 215, may also be executed. This occurs at step 870. At step 880, after all of the pending requests have been executed, the storage means 170 is transitioned to a non-operating mode to save power. This may be for a considerable period of time. The process of Fig. 8 may then be repeated.
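The basic flow of Fig. 8 might be summarised by the following sketch, which is a simplification under stated assumptions rather than the actual control program on processor 120; the request class, the in-memory queue and the injected helpers (assign_timeout and the mode controller interface) are all hypothetical.

    # Illustrative sketch of the basic power scheduling loop of Fig. 8 (steps 800-880).
    from dataclasses import dataclass

    @dataclass
    class AuxRequest:
        priority: int
        timeout_s: float = 0.0
        marked_for_execution: bool = False

    def on_auxiliary_request(req, queue, controller, assign_timeout):
        req.timeout_s = assign_timeout(req.priority)       # steps 810-820
        if req.timeout_s == 0.0:                           # step 830: high priority
            req.marked_for_execution = True
        queue.append(req)

        if any(r.marked_for_execution for r in queue):     # step 840
            controller.set_operating()                     # step 860: spin up
            queue.clear()                                  # step 870: execute everything pending
            controller.set_non_operating()                 # step 880: spin down again
        # otherwise the request simply remains queued (step 850)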
A further embodiment is illustrated in Fig. 6. The embodiment of Fig. 6 differs from that of Fig. 5 in that the request scheduling unit 320 further comprises a prediction unit 600. In the case where the storage means 170 is a hard disk drive, this may be a spin up prediction unit. The request queue 310 (or queues) with intermediate priority, i.e. between the highest and lowest priority, may have the predetermined timeout parameter 515 set to a typical timeout value of about 5 to 30 seconds, depending on the priority of the request queue. With the predetermined timeout parameter 515 associated with the auxiliary request 215, the request scheduling unit 320 may now make a decision on how to act upon the request submitted to the request queue 310.
For example, the request scheduling unit 320 may check the level of the buffer 110 and determine, using the prediction unit 600, how long it will be before the buffer 110 needs to be refilled to satisfy the streaming requests 205. If the buffer 110 needs to be refilled within the time period defined by the predetermined timeout parameter 515, the request scheduling unit 320 may postpone the execution of the auxiliary request 215 until that moment, so the storage means 170 does not have to spin up too early. When the buffer 110 does not need to be refilled within the time period defined by the predetermined timeout parameter 515, the auxiliary request 215 is executed immediately. The storage means 170 is then spun up earlier than the streaming requests alone would require, but the delay of the request, and thus the lag of the system, is kept to a minimum. The request scheduling unit 320 is therefore further adapted to determine the predicted spin up time of the storage means 170 to satisfy the streaming requests 205.
Using this mechanism, power can be saved by postponing auxiliary requests when possible, while applications (and the operating system) can specify how important early execution of a request is. When the storage means 170 is spun up to serve the auxiliary request 215 whose predetermined timeout parameter 515 was not long enough to justify waiting until the next streaming buffer refill, all other requests at all other priority levels will also be executed and the buffer 110 will be filled immediately, since the storage means 170 is operating, i.e. spinning, anyway.
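A minimal sketch of this decision, assuming a hypothetical helper that predicts how many seconds remain until the buffer 110 must next be refilled, could read as follows.

    # Illustrative sketch of the decision made with the prediction unit 600:
    # spin up early only if postponing until the next predicted refill would
    # exceed the predetermined timeout parameter 515 of the auxiliary request.
    def should_spin_up_now(timeout_s, seconds_until_next_refill):
        if timeout_s < seconds_until_next_refill:
            return True    # postponing would exceed the allowed delay: execute now
        return False       # postpone: the storage means will spin up soon enough anyway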
When using the values from Table 1 the embodiment of Fig. 6 would act as depicted in Fig. 11. In Fig. 11 the horizontal axis 1100 is time in seconds, and the figure illustrates a timeline of a typical real time streaming buffer refill cycle with streaming requests and auxiliary requests in between the streaming requests. The buffer 110 is refilled twice, at 0 seconds and at 55 seconds. Three auxiliary requests are queued at different priorities: at 15 seconds a request with priority 6 1110, at 40 seconds a request with priority 5 1120 and at 52 seconds a request with priority 2 1130. Suppose that the combined streams have such a bit rate, and the buffer 110 such a capacity, that the system needs to refill the buffer 110 every 55 seconds. The request scheduling unit 320 would then generate streaming requests 205 at this 55 second interval.
If an application, such as the network application 220, or the operating system issues the request with priority 6 1110 at a time 15 seconds after the buffer 110 started filling, the request with priority 6 1110 is executed immediately, although it is a low priority request. This is because the request scheduling unit 320 has determined, using the prediction unit 600, that it will be another 40 seconds before the storage means 170 has to spin up anyway and such a delay is not tolerable for the network application 220. The request is executed and the buffer 110 is filled again completely, although it was not yet empty. This is the most efficient method, because the storage means 170 is spinning anyway and the next refill time point will be further postponed to the future by the request scheduling unit 320.
The request with priority 5 1120 issued 40 seconds after the buffer 110 started filling will be postponed for 20 seconds and would get executed after the buffer 110 is filled again. This would be just within the predetermined timeout parameter 515 for a priority of 5 and would avoid having to spin up the storage means 170 too early. Likewise, the request with priority 2 1130 issued 3 seconds before the buffer 110 is to be filled is delayed for 4-5 seconds, although it is a fairly high priority request. It would not be desired that this request be delayed until the buffer 110 is full because that would take 8 seconds. This is longer than the predetermined timeout parameter 515 of that specific priority level, i.e. priority level 2. If the auxiliary request 215 were executed before the buffer 110 is completely filled, the storage means 170 would have to spin up earlier and thus power would be wasted. The auxiliary request 215 may instead be delayed until the buffer 110 is at such a level that enough data is held to provide the guarantees to the streaming application for the amount of time it would take to execute the auxiliary request 215. Then it is safe to execute the auxiliary request 215 and the request scheduling unit 320 may mark the auxiliary request 215 for execution. In this way the auxiliary request 215 pre-empts the filling of the buffer 110. Although the filling takes a little bit longer, the spinning up of the drive is postponed as long as possible while still executing the auxiliary request 215 in time.
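Replaying the three requests of Fig. 11 against the hypothetical timeout values sketched after Table 1 reproduces the spin up decisions described above; the finer interleaving of the priority 2 request with the buffer refill is not modelled in this sketch.

    # Illustrative replay of the Fig. 11 timeline using assumed timeout values
    # (priority 6 -> 30 s, priority 5 -> 20 s, priority 2 -> 5 s); refill due at t = 55 s.
    next_refill_s = 55.0
    for issue_s, priority, timeout_s in [(15.0, 6, 30.0), (40.0, 5, 20.0), (52.0, 2, 5.0)]:
        wait_s = next_refill_s - issue_s
        decision = "spin up now" if timeout_s < wait_s else "postpone until the refill"
        print(f"t={issue_s:.0f}s, priority {priority}: wait {wait_s:.0f}s -> {decision}")
    # Expected: the priority 6 request forces an early spin up (40 s wait exceeds its 30 s
    # timeout), whilst the priority 5 and priority 2 requests are served around the 55 s refill.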
In Fig. 9 a process is illustrated for use in the embodiment of Fig. 6 and is similar to the process of Fig. 8 in many respects. The process illustrated in Fig. 9 may also be implemented as a control program running on processor 120. Again, it is assumed that the storage means 170 is initially in a non-operating mode to save power, i.e. it is spun down. At step 800 data requests are received. At step 810 the priority of the data request is determined as described in the description of Fig. 5 relating to the priority determination unit 300. At step 810 the data requests may also optionally be filtered into streaming requests 205 which will force the transition of the storage means 170 to an operating mode and auxiliary requests 215 which may, or may not, force the transition of the storage means 170 to an operating mode. For example, for the auxiliary requests 215 step 810 may discriminate the priority 315 of the auxiliary requests 215 into high priority auxiliary requests, intermediate priority requests and low priority auxiliary requests.
At step 820 the predetermined timeout value 515 is assigned to the auxiliary request 215 based upon the priority 315. As has been described in the description relating to the timeout assignment unit 510 of Fig. 5, this may be performed using a table, like that shown in Table 1, a function or any other suitable means. At step 900 the time of the next transition to an operating mode of the storage means 170 is predicted. This may be based on the filling of the buffer 110 and any pending streaming requests 205. This was described in relation to the prediction unit 600 of Fig. 6 and also in relation to the example of Fig. 11 using the parameters of Table 1. At step 910 the requests for which the predetermined timeout parameter 515 is smaller than the time until the next predicted spin up time may be marked as requests to be executed. This has been described in relation to the request scheduling unit 320 of Fig. 5.
At step 840 the requests pending may be scanned to identify which requests are marked as requests to be executed. If there are no requests marked as requests to be executed then the process progresses to step 850 wherein the requests are queued. The requests may then be queued in the request queue 310. If, on the other hand, there are requests marked as requests to be executed then the process progresses to step 860.
At step 860 the storage means 170 immediately transitions to an operating mode and the requests to be executed are, therefore, treated in a similar manner to the streaming requests 205. When in the operating mode the auxiliary requests 215 that are marked as requests to be executed and any pending requests, be they streaming requests 205 or auxiliary requests 215, may also be executed. This occurs at step 870. The requests marked as requests to be executed may be executed in order of priority, if there are deadlines that may be missed. Otherwise, they may be handled in order of efficiency. For example, requests may be ordered to optimize the seeking required to satisfy the requests marked as requests to be executed. At step 880, after all of the pending requests have been executed, the storage means 170 is transitioned to a non-operating mode to save power. This may be for a considerable period of time. The process of Fig. 9 may then be repeated.
A further embodiment is illustrated in Fig. 7. The embodiment of Fig. 7 differs from that of Fig. 5 in that the scheduler 240 further comprises a timeout expiration detection unit 700. This is useful during periods when no streaming application is running, i.e. when the streaming requests 205 are not being generated. The timeout expiration detection unit 700 may examine the request queue 310 and receive the predetermined timeout parameter 515 for the auxiliary request 215. The timeout expiration detection unit 700 may then initiate an internal timer to trigger an event when the time period of the predetermined timeout parameter 515 has elapsed. The timeout expiration detection unit 700 may then check to see if the auxiliary request 215 has already been marked for execution. If this is not the case, the timeout expiration detection unit 700 may then send an indication 710 to the request scheduling unit 320 that the auxiliary request 215 should be marked for execution. The timeout expiration detection unit 700 may also directly mark the auxiliary request 215 for execution, though this is not shown in Fig. 7.
The request scheduling unit 320 then proceeds in a manner described earlier in the description relating to Fig. 5. In the embodiment of Fig. 7 the auxiliary request 215 can either be executed immediately always, or the request scheduling unit 320 can wait until the indication 710 arrives so as to gather more requests and execute them all at once and thus save power. The lowest priority requests may be delayed for an infinite amount of time and thus will only be executed whenever a higher priority request is issued. Alternatively, this lowest priority request may be assigned a very large, but not infinite, time-out value to avoid losing requests. To work with applications that are not aware of the priority mechanism, a default priority could be given to requests that do not state an explicit priority. Most requests cannot be delayed indefinitely, so the default priority may rarely be the lowest priority. Depending on the usage of the system, the default priority can be dynamically adapted.
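One possible, purely illustrative way to realise the timeout expiration detection unit 700 in software is a per-request timer that marks the request for execution when it fires; threading.Timer is used here only as a stand-in for whatever timer facility the device actually provides, and the request is assumed to carry the marked_for_execution flag from the earlier sketch.

    # Illustrative sketch of the timeout expiration detection unit 700: when the
    # predetermined timeout parameter 515 elapses and the request has not yet been
    # marked, signal the request scheduling unit 320 (indication 710).
    import math
    import threading

    def watch_timeout(request, timeout_s, mark_for_execution):
        if math.isinf(timeout_s):
            return None               # lowest priority: only served at the next spin up
        def on_expiry():
            if not request.marked_for_execution:
                mark_for_execution(request)
        timer = threading.Timer(timeout_s, on_expiry)
        timer.daemon = True
        timer.start()
        return timer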
In Fig. 10 a more complicated process is illustrated for use in the embodiment of Fig. 6, Fig. 7 or a combination of Fig. 6 and Fig. 7, taking into account multiple types of data requests. In general, the individual process steps are similar to those of Fig. 8 and Fig. 9. The process illustrated in Fig. 10 may also be implemented as a control program running on processor 120. Again, it is assumed that the storage means 170 is initially in a non-operating mode to save power, i.e. it is spun down. At step 800 data requests are received. The requests may be streaming requests 205, also known as real time, or RT, requests, and/or auxiliary requests 215. At step 1000 the streaming requests 205 are filtered from the auxiliary requests 215. The streaming requests 205 cause the storage means 170 to be transitioned to an operating mode in step 860. The details of step 860 have been described earlier in detail. At step 1010 the streaming requests 205 are handled in a conventional manner by the storage means 170. At step 870 all other pending requests are also handled whilst the storage means 170 is still in the operating mode. This will ensure that the buffer 110 is filled and the saving of power is optimized.
If the incoming request was an auxiliary request 215, the process of Fig. 10 continues at step 810. At step 810 the priority of the data request is determined as described in the description of Fig. 5 relating to the priority determination unit 300. Step 810 may discriminate the priority 315 of the auxiliary requests 215 into high priority auxiliary requests, intermediate priority requests and low priority auxiliary requests. At step 820 the predetermined timeout value 515 is assigned to the auxiliary request 215 based upon the priority 315. As has been described in the description relating to the timeout assignment unit 510 of Fig. 5, this may be performed using a table, like that shown in Table 1, a function or any other suitable means. After assigning the predetermined timeout value 515 a check may be performed, at step 1020, to see if there are streams running, i.e. whether the streaming requests 205 are being generated.
If there are streams running the process continues at step 900. At step 900 the time of the next transition to an operating mode of the storage means 170 is predicted. This may be based on the filling of the buffer 110 and any pending streaming requests 205. This was described in relation to the prediction unit 600 of Fig. 6 and also in relation to the example of Fig. 11 using the parameters of Table 1. At step 1030 in Fig. 10 a check is performed on the predetermined timeout parameter 515 and the predicted spin up time. If the predetermined timeout parameter 515 is smaller than the time until the next predicted spin up time then the process moves to step 910, where the associated request, or requests, is/are marked as requests to be executed. This has been described in relation to the request scheduling unit 320 of Fig. 5. This also causes an execution of step 860 to set the storage means 170 into an operating mode. The requests are then handled in step 870 and the storage means 170 is set to a non-operating mode in step 880. If the predetermined timeout parameter 515 is not smaller than the time until the next predicted spin up time then the process moves to step 850 wherein the requests are queued.
If there are no streams running the process continues at step 850. In such a situation there may never be a trigger to transition the storage means 170 into an operating mode. In step 850 the auxiliary requests 215 are queued. In the following step, step 1040, the process waits until a time period corresponding to the predetermined timeout parameter 515 has expired. This has been described in the description relating to the timeout expiration detection unit 700 of Fig. 7. Upon expiration of the predetermined timeout parameter 515 for the auxiliary request 215, the auxiliary request 215 is marked for execution at step 910. This process step has been described previously. The marking of the auxiliary request 215 as a request to be executed causes the storage means 170 to transition to an operating mode in step 860. The auxiliary request 215 and any other pending requests are executed to fill the buffer 110 in step 870. The storage means 170 is then transitioned to a non-operating mode in step 880. The next spin up time may then be postponed to the future and the power may be used in an efficient manner.
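Bringing the pieces together, the decision logic of Fig. 10 could be sketched as below; every helper passed in (the mode controller, the timeout assignment, the stream detection, the refill prediction, the execution of pending requests and the expiry timer) is a hypothetical stand-in for the corresponding unit described above, not an implementation prescribed by the specification.

    # Illustrative sketch of the combined flow of Fig. 10 for an incoming request.
    def handle_request(req, is_streaming, queue, controller, assign_timeout,
                       streams_running, seconds_until_next_refill,
                       execute_all_pending, start_expiry_timer):
        if is_streaming:                                        # step 1000: RT request
            controller.set_operating()                          # step 860
            execute_all_pending()                               # steps 1010 and 870
            controller.set_non_operating()                      # step 880
            return

        req.timeout_s = assign_timeout(req.priority)            # steps 810-820
        if streams_running():                                   # step 1020
            if req.timeout_s < seconds_until_next_refill():     # steps 900 and 1030
                req.marked_for_execution = True                 # step 910
                controller.set_operating()                      # step 860
                execute_all_pending()                           # step 870
                controller.set_non_operating()                  # step 880
            else:
                queue.append(req)                               # step 850: wait for the refill
        else:
            queue.append(req)                                   # step 850
            start_expiry_timer(req, req.timeout_s)              # step 1040 -> step 910 on expiry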
In summary the invention discloses methods and devices for power scheduling of data requests. The data requests may comprise auxiliary requests 215 and streaming requests 205. Power is saved by setting the storage means 170 to a non-operating mode when no data requests are pending and to an operating mode when at least one data request is to be executed. The auxiliary requests 215 each have a priority 315 that is used to assign a predetermined timeout value 515. The predetermined timeout value indicates a maximum time for which each auxiliary request may be postponed. If the predetermined timeout value is exceeded the auxiliary request will be marked as a request to be executed and the storage means 170 will transition to the operating mode. The predicted spin up time for the next streaming request may also be compared to the predetermined timeout value and, where postponement until that spin up would exceed the timeout, the auxiliary request may be executed immediately.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be capable of designing many alternative embodiments without departing from the scope of the invention as defined by the appended claims. Furthermore, any of the embodiments described comprises implicit features, such as an internal current supply, for example, a battery or an accumulator. In the claims, any reference signs placed in parentheses shall not be construed as limiting the claims. The words "comprising" and "comprises", and the like, do not exclude the presence of elements or steps other than those listed in any claim or the specification as a whole. The singular reference of an element does not exclude the plural reference of such elements and vice versa. In a device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.


CLAIMS:
1. A device (100) for power scheduling of data requests (205, 215), the data requests comprising auxiliary requests (215) regarding auxiliary information, the device comprising: a storage means (170) adapted to store and/or retrieve information defined by the data requests; a priority determination unit (300) adapted to determine a priority (315) of each one of the auxiliary requests; a timeout assignment unit (510) adapted to assign a predetermined timeout parameter (515) to each one of the auxiliary requests dependent upon the priority determined; a request scheduling unit (320) adapted to determine the data requests, if any, that are to be executed based upon the predetermined timeout parameter; and a storage means mode controller (330) adapted to set the storage means to a non-operating mode when no data requests are pending and set the storage means to an operating mode when at least one of the data requests is to be executed.
2. The device of claim 1 wherein the data requests further comprise streaming requests (205) regarding real time information; and the request scheduling unit is adapted to determine a predicted spin up time to satisfy the streaming requests and to mark the auxiliary requests for which the predetermined timeout parameter is smaller than the time until the next predicted spin up time as data requests to be executed.
3. The device of claim 1 further comprising a timeout expiration detection unit (700) communicatively coupled to the request scheduling unit, the timeout expiration detection unit being adapted to communicate an indication (710) that the predetermined timeout parameter (515) has expired for at least one of the auxiliary requests; and wherein the request scheduling unit is adapted to mark auxiliary requests for which the indication is received as data requests to be executed.
4. The device of claim 2 wherein the timeout assignment unit (510) is further adapted to assign an execute immediately value to the predetermined timeout parameter should the priority be determined as high.
5. The device of claim 4 wherein the request scheduling unit (320) is further adapted to mark the auxiliary requests that are to be immediately executed and any pending streaming requests as data requests to be executed.
6. The device of claim 1 wherein the timeout assignment unit (510) is further adapted to assign an infinite timeout value to the predetermined timeout parameter should the priority be determined as low.
7. The device of claim 6 wherein the request scheduling unit (320) is further adapted to mark the auxiliary requests that have been assigned an infinite timeout value as data requests to be executed when the mode of the storage means is set to an operating mode.
8. The device of claim 1 wherein the timeout assignment unit (510) is further adapted to assign an intermediate timeout value, between an execute immediately value and an infinite timeout value, to the predetermined timeout parameter should the priority be determined as intermediate between a low and high.
9. The device of claim 2 wherein the device further comprises a buffer (110) for temporarily storing the information; and wherein the request scheduling unit is further adapted to mark the streaming requests as requests to be executed based upon a filling level of the buffer.
10. The device of claim 1 wherein the request scheduling unit is further adapted to: read additional data immediately following the information; store the additional data; and, in the event of receiving a next auxiliary request requesting the additional data, provide the additional data.
11. The device of claim 1 wherein the request scheduling unit (320) is further adapted to mark all pending data requests as data requests to be executed when the storage means (170) is set to an operating mode.
12. The device of claim 1 realized as at least one of the group consisting of: a Set-Top-Box device; a digital video recording device; a network-enabled device; a conditional access system; a portable audio player; a portable video player; a mobile phone; a DVD player; a CD player; a hard disk based media player; an Internet radio device; a computer; a television; a public entertainment device; and an MP3 player.
13. A method for power scheduling of data requests (205, 215), the data requests comprising auxiliary requests (215) regarding auxiliary information, the method comprising the steps of: receiving (800) the data requests defining information to be stored and/or retrieved; determining (810) a priority (315) of each one of the auxiliary requests; assigning (820) a predetermined timeout parameter (515) to each one of the auxiliary requests dependent upon the priority determined; determining (830) the data requests, if any, that are to be executed based upon the predetermined timeout parameter; and setting (880) the storage means to a non-operating mode when no data requests are pending and setting (860) the storage means to an operating mode when at least one of the data requests is to be executed.
14. The method of claim 13 wherein the data requests further comprise streaming requests (205) regarding real time information; and the method step of determining the data requests, if any, that are to be executed further comprises the method steps of: determining (900) a predicted spin up time to satisfy the streaming requests; and marking (910) the auxiliary requests for which the predetermined timeout parameter is smaller than the time until the next predicted spin up time as data requests to be executed.
15. The method of claim 13 further comprising the method steps of: detecting (1030) that the predetermined timeout parameter has expired for at least one of the auxiliary requests; and marking (910) auxiliary requests for which it has been detected that the predetermined timeout parameter has expired as data requests to be executed.
16. A program element directly loadable into the memory of a programmable device, comprising software code portions for performing, when said program element is run on the device, the method steps of: receiving (800) data requests (205, 215) defining information to be stored and/or retrieved, the data requests comprising auxiliary requests (215) regarding auxiliary information; determining (810) a priority (315) of each one of the auxiliary requests; assigning (820) a predetermined timeout parameter (515) to each one of the auxiliary requests dependent upon the priority determined; determining (830) the data requests, if any, that are to be executed based upon the predetermined timeout parameter; and setting (880) the storage means to a non-operating mode when no data requests are pending and setting (860) the storage means to an operating mode when at least one of the data requests is to be executed.
17. A computer-readable medium directly loadable into the memory of a programmable device, comprising software code portions for performing, when said code portions are run on the device, the method steps of: receiving (800) data requests (205, 215) defining information to be stored and/or retrieved, the data requests comprising auxiliary requests (215) regarding auxiliary information; determining (810) a priority (315) of each one of the auxiliary requests; assigning (820) a predetermined timeout parameter (515) to each one of the auxiliary requests dependent upon the priority determined; determining (830) the data requests, if any, that are to be executed based upon the predetermined timeout parameter; and setting (880) the storage means to a non-operating mode when no data requests are pending and setting (860) the storage means to an operating mode when at least one of the data requests is to be executed.
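By way of illustration only, the following minimal sketch shows one possible behaviour of the timeout-based scheduling recited in claims 13 to 15: each auxiliary request is given a deadline derived from its priority, a spin-up is forced when a deadline expires or would expire before the next predicted spin-up, and all pending requests are then serviced in a single batch so that the storage means can otherwise remain in a non-operating mode. All identifiers, timeout values and the three-level priority mapping below are assumptions made for this example and are not taken from the application.

# Minimal, illustrative sketch of the timeout-based scheduling in claims 13-15.
# All identifiers, timeout values and the three-level priority mapping are
# assumptions made for this example, not an implementation from the application.
import heapq
import math

HIGH, INTERMEDIATE, LOW = "high", "intermediate", "low"

# Hypothetical predetermined timeout parameters (seconds) per priority:
# execute-immediately, an intermediate value, and an infinite timeout.
TIMEOUTS = {HIGH: 0.0, INTERMEDIATE: 30.0, LOW: math.inf}


class PowerScheduler:
    """Defers auxiliary requests so the storage device can stay spun down."""

    def __init__(self):
        self._pending = []   # heap of (deadline, sequence number, description)
        self._seq = 0

    def submit(self, description, priority, now):
        if priority == HIGH:
            # Execute-immediately value: spin up for this request straight away.
            self._spin_up_and_execute([description])
        else:
            deadline = now + TIMEOUTS[priority]
            heapq.heappush(self._pending, (deadline, self._seq, description))
            self._seq += 1

    def tick(self, now, next_predicted_spin_up=None):
        """Called periodically and whenever a streaming spin-up is predicted.

        If any deadline has expired, or would expire before the next predicted
        spin-up, the disk is spun up and every pending request is serviced in
        one batch (infinite-timeout requests piggyback on any spin-up);
        otherwise the disk remains in its non-operating mode.
        """
        horizon = now if next_predicted_spin_up is None else next_predicted_spin_up
        if self._pending and self._pending[0][0] <= horizon:
            batch = [entry[2] for entry in sorted(self._pending)]
            self._pending.clear()
            self._spin_up_and_execute(batch)

    def _spin_up_and_execute(self, batch):
        print("spin up (operating mode)")
        for description in batch:
            print(f"  executing: {description}")
        print("spin down (non-operating mode)")


if __name__ == "__main__":
    sched = PowerScheduler()
    sched.submit("thumbnail cache write", LOW, now=0.0)      # infinite timeout
    sched.submit("metadata update", INTERMEDIATE, now=0.0)   # 30 s timeout
    sched.submit("user-visible read", HIGH, now=0.0)         # executed at once
    sched.tick(now=5.0)                                # nothing due: stay idle
    sched.tick(now=5.0, next_predicted_spin_up=60.0)   # refill predicted at 60 s:
                                                       # the 30 s deadline cannot
                                                       # wait, so both deferred
                                                       # requests are batched

Running the sketch executes the high-priority request immediately, leaves the disk idle while nothing is due, and batches both deferred requests with the spin-up predicted for the streaming refill, mirroring the behaviour described in claims 4, 8, 11 and 14.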
PCT/IB2007/051094 2006-04-03 2007-03-28 A device and a method for power scheduling of data requests WO2007113744A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP06112156.2 2006-04-03
EP06112156 2006-04-03

Publications (1)

Publication Number Publication Date
WO2007113744A1 (en)

Family

ID=38329569

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2007/051094 WO2007113744A1 (en) 2006-04-03 2007-03-28 A device and a method for power scheduling of data requests

Country Status (1)

Country Link
WO (1) WO2007113744A1 (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004066293A1 (en) * 2003-01-17 2004-08-05 Koninklijke Philips Electronics N.V. Power efficient scheduling for disc accesses

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
PAPATHANASIOU A E ET AL: "Energy efficiency through burstiness", MOBILE COMPUTING SYSTEMS AND APPLICATIONS, 2003. PROCEEDINGS. FIFTH IEEE WORKSHOP ON 9-10 OCT. 2003, PISCATAWAY, NJ, USA, IEEE, 9 October 2003 (2003-10-09), pages 44 - 53, XP010662874, ISBN: 0-7695-1995-4 *
PAPATHANASIOU A. E., SCOTT M. L.: "Increasing Disk Burstiness for Energy Efficiency", UNIVERSITY OF ROCHESTER, TECHNICAL REPORT 792, November 2002 (2002-11-01), Rochester, NY, USA, XP002446871, Retrieved from the Internet <URL:http://www.cogsci.rochester.edu/~papathan/papers/2002-URCS-TR-792-Bursty/2002-urcs-tr792.pdf> [retrieved on 20070815] *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108540575A (en) * 2018-04-27 2018-09-14 北京奇艺世纪科技有限公司 A network request scheduling method and device
CN108540575B (en) * 2018-04-27 2021-07-20 北京奇艺世纪科技有限公司 Network request scheduling method and device

Similar Documents

Publication Publication Date Title
US11520496B2 (en) Electronic device, computer system, and control method
US20190155770A1 (en) Deferred inter-processor interrupts
WO2018082570A1 (en) I/o request scheduling method and device
TWI472914B (en) Hard disk drive,hard drive assembly and laptop computer with removable non-volatile semiconductor memory module,and hard disk controller integrated circuit for non-volatile semiconductor memory module removal detection
US7584312B2 (en) Data processing apparatus having improved buffer management
EP3411775B1 (en) Forced idling of memory subsystems
JP2009543172A (en) Apparatus and method for managing power consumption of multiple data processing units
US9411649B2 (en) Resource allocation method
US10503238B2 (en) Thread importance based processor core parking and frequency selection
JP2008511915A (en) Context-based power management
US8341437B2 (en) Managing power consumption and performance in a data storage system
US20230199049A1 (en) Modifying content streaming based on device parameters
JP2002099433A (en) System of computing processing, control method system for task control, method therefor and record medium
CN111722697B (en) Interrupt processing system and interrupt processing method
US11010094B2 (en) Task management method and host for electronic storage device
WO2012109961A1 (en) Method and device for allocating browser process
US20160117269A1 (en) System and method for providing universal serial bus link power management policies in a processor environment
CN110795323A (en) Load statistical method, device, storage medium and electronic equipment
WO2007113744A1 (en) A device and a method for power scheduling of data requests
US10884477B2 (en) Coordinating accesses of shared resources by clients in a computing device
WO2015052823A1 (en) Cloud management device, method for managing same, and system thereof
JP2012008767A (en) Processor, reproducing device, and processing device
Park et al. Hardware‐Aware Rate Monotonic Scheduling Algorithm for Embedded Multimedia Systems
TW201626154A (en) Hard disk device and method for decreasing power consumption
KR20050043098A (en) A low power data storage system and a method thereof

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07735295

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 07735295

Country of ref document: EP

Kind code of ref document: A1