WO2006126143A2 - Caching for low power consumption grid peer-to-peer devices - Google Patents

Caching for low power consumption grid peer-to-peer devices

Info

Publication number
WO2006126143A2
WO2006126143A2 (PCT/IB2006/051559)
Authority
WO
WIPO (PCT)
Prior art keywords
data
cache
upload
download
upload data
Prior art date
Application number
PCT/IB2006/051559
Other languages
French (fr)
Other versions
WO2006126143A3 (en)
Inventor
Henricus Adrianus Gerardus Vlemmix
Keith Baker
Erick Martijn Van Rijk
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V. filed Critical Koninklijke Philips Electronics N.V.
Publication of WO2006126143A2 publication Critical patent/WO2006126143A2/en
Publication of WO2006126143A3 publication Critical patent/WO2006126143A3/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/12 Replacement control
    • G06F 12/121 Replacement control using replacement algorithms
    • G06F 12/126 Replacement control using replacement algorithms with special data handling, e.g. priority of data or instructions, handling errors or pinning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0862 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with prefetch
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0866 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/10 Providing a specific technical effect
    • G06F 2212/1028 Power efficiency
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/10 Providing a specific technical effect
    • G06F 2212/1032 Reliability improvement, data loss prevention, degraded operation etc.
    • G06F 2212/1036 Life time enhancement
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

A method and system for dynamic allocation of a cache in a device is disclosed. First the cache is loaded with upload data. When a download request is started, the upload data is overwritten with download data from an external network. The cache is then flushed to a memory when the cache is full of download data. A large cache provides temporary storage for incoming and outgoing data and is proactively filled by predicting the data needed for outgoing traffic. In this way a hard disk of the system may temporarily be disabled when it is not needed. Hence the life expectancy of hard disks in consumer electronic devices may be increased, while hard disk access is managed to control power consumption by the hard disk.

Description

CACHING FOR LOW POWER CONSUMPTION GRID PEER-TO-PEER DEVICES
This invention pertains in general to the field of data processing in a consumer device. More particularly, the invention relates to a method and system for caching data in a consumer device which contains a hard drive, wherein the caching method increases the life expectancy of the hard drive while controlling power consumption by managing hard disk access when the device is used in conjunction with a P2P network, which places social demands on the distribution of data.
Current peer-to-peer (P2P) networks only work as software on personal computer (PC) hardware; this arises from the historical development of P2P networks, which has been PC oriented. However, it may be advantageous in the future to allow peer-to-peer networks to communicate with consumer electronic devices. PC hardware and consumer electronic devices face different demands and expectations, both from the consumer and from the government. Thus an application originally developed for one device (PC) will face different demands when ported to another device (CE, consumer electronics). Currently, PC hardware is not fit for use in a consumer electronic device in the home environment because of the life expectancy of the consumer electronic device. The life expectancy of consumer electronic devices is a multiple of that of a PC. For example, the life span of a PC is 3 years while the life expectancy of a television is 8 years. Therefore, if one wants to use PC hardware in a consumer electronic home appliance, one needs to take extra measures to increase the life expectancy of the PC hardware. One example of PC hardware that may be beneficial to add to a consumer electronic device is a hard disk drive (HDD). A HDD in a PC might crash after 2 years, but it is expected that when this happens the user of the PC will have backups of their data. In consumer electronic devices, backups are not common, both because the life expectancy is greater than that of PC hardware and because user background, behaviour, and guarantee expectations are different, resulting in a higher cost of quality in case of failure. Thus, there is a need for extending the life of a HDD in a consumer electronic device, such as a PVR. Furthermore, new consumer electronic energy standards restrict the energy consumption of consumer electronic devices when they are in stand-by mode. For example, the California Energy Commission (CEC) restricts power consumption in stand-by mode to 3 watts for televisions and DVD players/recorders. These power consumption restrictions may be a problem for any PC hardware added to the consumer electronic device, since PC hardware is not under the same restrictions. For example, if a HDD is added to a consumer electronic device, the power consumption of the HDD will need to be controlled so that the consumer electronic device does not violate the power consumption restrictions. One way to control the power consumption of the HDD in a consumer device is to control the access to the HDD. Thus, there is a need for a method of managing hard drive access to increase the overall power efficiency of a consumer device containing the HDD.
One method for controlling access to a hard disk is to use a cache memory. Cache memories are well known in the art. They serve to bridge the gap between the operating speed of a processor and that of a main memory. The cache memory has faster access timing than the main memory. When the processor uses (or is expected to use) data from the main memory, a copy of that data is stored in the cache memory. Thus, when the processor needs the data, the data can be fetched from the cache memory faster than it could be fetched from the main memory. Cache memory is generally smaller than the main memory. When data from the main memory location is to be stored in cache memory and no free cache memory is available, data that is in the cache from another main memory location will have to be overwritten. A cache management unit selects which cache location will be used depending on the cache replacement strategy. Unfortunately, current cache replacement strategies do not overcome the problems cited above with respect to using PC hardware in consumer electronic devices running P2P content distribution applications.
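For concreteness, a conventional replacement strategy such as least-recently-used (LRU) can be sketched in a few lines of Python. This only illustrates the kind of prior-art strategy referred to above; the class and its methods are not part of the disclosed method.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal least-recently-used cache, shown only to illustrate a
    conventional replacement strategy (not the method of this invention)."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key):
        if key not in self.store:
            return None              # cache miss: caller fetches from main memory
        self.store.move_to_end(key)  # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the least recently used entry
```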
Thus, there is a need for a new cache replacement method for dynamically allocating cache which increases the life expectancy of hard disks in consumer electronic devices while managing hard disk access to control power consumption by the hard disk. Accordingly, the present invention preferably seeks to mitigate, alleviate or eliminate one or more of the above-identified deficiencies and disadvantages in the art, singly or in any combination, and solves at least the above-mentioned problems by providing a system, a method, a processor, and a computer-readable medium that dynamically allocate cache in a consumer electronic device, increasing the life expectancy of a hard drive in the consumer electronic device while controlling power consumption by managing access to the disk drive, according to the appended patent claims.
The general solution according to the invention is to implement a large cache for temporary storage of incoming and outgoing data, wherein the hard disk is temporarily disabled when it is not needed. Furthermore, the cache is proactively filled by predicting the data needed for the outgoing traffic.
According to one aspect of the invention, a method for dynamic allocation of a cache in a device is disclosed, wherein the method comprises the steps of: loading the cache with upload data; overwriting upload data with download data from an external network when a download request is started; and flushing the cache to a memory when the cache is full of download data.
According to another aspect of the invention, a processor is disclosed, wherein the processor comprises: a cache having a plurality of data blocks; means for loading the cache with upload data, means for overwriting upload data with download data from an external network when a download request is started; means for flushing the cache to a memory when the cache is full of download data, said means being operatively connected to each other.
According to another aspect of the invention, a system is disclosed, wherein the system comprises: a memory device; a cache coupled to the memory device, the cache having a plurality of data blocks; a processor for loading the cache with upload data, overwriting upload data with download data from an external network when a download request is started, and flushing the cache to a memory when the cache is full of download data. According to a further aspect of the invention, a computer-readable medium having embodied thereon a computer program for processing by a computer is provided. The computer program comprises a first code segment for loading the cache with upload data, a second code segment for overwriting upload data with download data from an external network when a download request is started; a third code segment for flushing the cache to a memory when the cache is full of download data.
The present invention has the advantage over the prior art that it increases the life expectancy of a hard disk while also controlling the power consumption by the hard disk by managing hard disk access.
These and other aspects, features and advantages of which the invention is capable will be apparent and elucidated from the following description of embodiments of the present invention, reference being made to the accompanying drawings, in which:
Fig. 1 illustrates a consumer electronic device and a network according to one embodiment of the invention; Fig. 2 is a flow chart illustrating a download operation in the cache according to one embodiment of the invention;
Fig. 3 illustrates a system according to one embodiment of the invention;
Fig. 4 illustrates a processing system according to one embodiment of the invention; and
Fig. 5 illustrates a computer-readable medium according to one embodiment of the invention.
The following description focuses on an embodiment of the present invention applicable to a cache management system and in particular to a cache management system in a consumer electronic device. However, it will be appreciated that the invention is not limited to this application but may be applied to many other devices as well.
Briefly, the invention discloses a way to implement P2P applications in consumer electronic devices, but specific to the goals of consumer product in terms of energy consumption and system reliability for storage on disk. The invention keeps in mind the specific requirements for consumer electronic devices like life expectancy and low power consumption. To increase life expectancy and energy savings, the invention implements the following operations. First, the hard disk is stopped when not needed. To increase the time that the hard disk is not needed, the incoming and outgoing data is cached in memory. This is possible because there are no strict time restrictions on cache-reads due to the more time- independent nature of peer-to-peer downloading as compared to more classic usage of disk storage. Because of the seemingly random, but partially predictable, nature of requested outgoing data, the needed contents of the cache are predicted. Predicting the needed contents for the cache can be done by interpreting the statistical information that can be retrieved from the P2P network.
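As a rough illustration of the first operation, stopping the disk while the cache absorbs traffic, consider the following Python sketch. The `spin_up`/`spin_down` disk interface and the `disk`/`cache` objects are assumptions for illustration; the disclosure does not prescribe any API.

```python
class DiskGate:
    """Minimal sketch: the HDD stays spun down while P2P traffic is served
    from the cache, and is spun up only for a flush-and-refill burst.
    The spin_up()/spin_down() calls and the injected disk/cache objects
    are hypothetical, not taken from the patent text."""

    def __init__(self, disk, cache):
        self.disk = disk
        self.cache = cache

    def serve_from_cache(self, request):
        # Normal operation: no disk access at all.
        return self.cache.read(request)

    def flush_and_refill(self, refill_refs):
        self.disk.spin_up()                            # disk active only here
        self.disk.write(self.cache.download_blocks())  # flush downloaded data
        self.cache.load(self.disk.read(refill_refs))   # refill with upload data
        self.disk.spin_down()                          # back to low-power state
```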
Fig. 1 illustrates a block diagram of a consumer electronic device and P2P network according to one embodiment of the invention. The consumer electronic device may comprise, for example, a television, set-top box, radio receiver, video cassette recorder, a CD or DVD player/recorder, etc. While the foregoing are described generically herein as consumer electronic devices, it will be recognized that this list is not limiting and that other devices may be utilized.
In Fig. 1, a consumer electronic device 100 is connected to a network 102, e.g. a P2P network. The consumer electronic device 100 comprises, among other features not illustrated, a network interface 103, a processor 104, a cache management unit 105, a cache 106 and a hard disk memory 107. In operation, the consumer electronic device 100 communicates with the network 102 through the network interface 103. Data can be transmitted to and from the consumer device 100 through the network interface 103. The processor 104 controls the operation of the consumer electronic device 100. It will be understood that the processor 104 may be any type of processor. The processor 104 is connected to a cache management unit 105, and together they govern the interaction between the cache 106 and the memory 107. While the processor 104, cache management unit 105 and cache 106 are depicted as separate units, it will be understood that the cache management unit and the cache may be part of the processor, and the invention is not limited thereto. The caching operation in the consumer electronic device will now be described below.
With reference to Fig. 2, a download scenario from the network 102 to the consumer electronic device will now be explained. As mentioned above, the processor 104 and the cache management unit 105 control the operation of the cache 106 and the memory 107. If a new download begins, the cache cannot be pre-filled with data initially; in case a download is resumed, data is already available. In the latter case the cache is filled with upload requested data (Ur), which is data that has been requested by the network 102, and, if there is any free cache memory available, generic upload data (U). As will be explained in more detail below, the generic data is data that the processor 104 has predicted will be needed in the cache. The processor 104 can predict the needed contents for the cache by interpreting statistical information that can be retrieved from the network 102. Not only local statistics are used to build up a list of needed cache contents; the expected need of the network is also taken into account. For example, prediction of generic upload data (U) can be performed by analysis of the known Ur, the upload request queue and the statistics that are available on the P2P network: number of peers uploading, number of peers downloading, number of available completed copies, and freshness and uniqueness of the data.
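The prediction of generic upload data (U) lends itself to a simple scoring heuristic over exactly the statistics listed above. The sketch below is one possible reading; the field names, weights and the per-block `stats` structure are assumptions for illustration, not part of the disclosure.

```python
def predict_generic_upload(candidate_blocks, stats, free_slots):
    """Rank candidate blocks for the generic upload portion (U) of the cache
    using P2P swarm statistics. Field names and weights are illustrative."""
    def score(block_id):
        s = stats[block_id]
        # Blocks many peers want but few peers serve score highest.
        demand = s["peers_downloading"] / (1.0 + s["peers_uploading"])
        rarity = 1.0 / (1.0 + s["completed_copies"])  # rare blocks score higher
        return demand + rarity + s["freshness"] + s["uniqueness"]

    ranked = sorted(candidate_blocks, key=score, reverse=True)
    return ranked[:free_slots]  # only as many blocks as the cache can hold
```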
Returning to Fig. 2, as the download data (D) starts to fill the cache 106, the generic upload data is overwritten by the download data in steps 201 and 203. The network 102 does not need the generic data at this time, so it is overwritten by the download data, which in turn becomes available to the network 102. This dynamic behaviour results in less disk activity, since the unneeded upload data is overwritten by the download data. Because of the limited upload capacity, the upload data size can be decreased in the cache 106 to the point where only the requested upload data remains. The freed memory can now be used to increase the space for the downloaded data.
In step 205, the download data begins to include requested data (Dr) which is requested by peers in the network 102. Note that the download request data (Dr) is also used as upload data as will be explained below. Once all of the generic upload data has been overwritten, the upload requested data then begins to be overwritten as illustrated in step 207. A reference to the requested upload data that is overwritten in the cache may be queued for later retrieval at the next cache flush.
The upload request queue is a limited set of references to data blocks that are requested by peers from the network but were not available in the cache at the time of request; the references take far less memory space than the actual referenced data. When a request is made for data that is not available in the cache, a reference to this data is stored in the request queue. At the time of cache filling after the cache flush, the request queue is used to determine which data blocks the cache memory (Ur) is filled with. As illustrated in step 209, the downloaded data continues to overwrite all of the upload data (both generic and requested) until the cache is full. Since no new incoming data can be written into the cache, a cache flush is necessary. During the cache flush, all of the downloaded data is written to the memory 107. In addition, the cache is filled with the data referenced in the upload request queue, supplemented with the download request data (now called upload request data), if any. If there is any free space remaining in the cache 106, generic upload data is loaded into the cache to fill it, as illustrated in step 211.
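Steps 209 to 211 can be read as a single flush-and-refill routine, sketched below. The `cache`, `disk` and `request_queue` interfaces are invented for illustration; only the ordering (flush downloads, refill Ur from the queue, top up with predicted U) follows the text.

```python
def on_cache_full(cache, disk, request_queue, predict_generic):
    # Step 209: the cache is full of download data, so flush it to disk
    # in one burst of disk activity.
    disk.write(cache.download_data())
    cache.clear()

    # Refill with requested upload data (Ur): blocks whose references were
    # queued because they were not in the cache when peers asked for them.
    cache.load(disk.read(request_queue.drain()))

    # Step 211: any remaining space is topped up with predicted generic
    # upload data (U).
    cache.load(predict_generic(cache.free_slots()))
```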
According to the invention, the cache is flushed when the download operation ends. When the download operation ends and no further download operation is necessary, the system starts the upload operation using the same upload request queue as during the download operation.
An upload operation to a network 301 will now be described with reference to Fig. 3. Because the cache is filled with upload requested data and no download is active, the upload data in the cache 305 can be uploaded to the network 301. Assuming a high upload speed of 500 KB/sec for the network interface 303 and a cache capacity of 512 MB, it will take at least 17 minutes to completely send the contents of the cache 305 to the network 301. The memory 307 is an order of magnitude larger than the cache 305, and the cache 305 is an order of magnitude larger than the upload capacity. New incoming requests are put in a queue 309 to be handled when the next cache update occurs. The priority of the filling of the cache is based on unfinished uploads first, followed by the contents of the upload request queue 309. The queue is handled on a last call, first served basis, as an older queued request is more likely to already have been handled by another peer in the network.
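As a worked check, 512 MB at 500 KB/sec is 512 x 1024 / 500, or roughly 1049 seconds, i.e. about 17.5 minutes of uploading with the disk powered down, consistent with the figure above. The last call, first served queue behaviour amounts to a LIFO stack; the sketch below assumes hypothetical `cache`, `net` and refill interfaces.

```python
def drain_cache_to_network(cache, net):
    """Upload cached blocks while the disk stays spun down. A 512 MB cache
    at 500 KB/s keeps the network busy for ~1049 s (about 17.5 minutes)."""
    while cache.has_upload_data():
        net.send(cache.next_upload_block())

def refill_from_requests(request_stack, refill):
    """Last call, first served: pop the newest request references first,
    since an older queued request is likely already served by another peer."""
    while request_stack and refill.has_space():
        refill.add(request_stack.pop())  # LIFO: newest reference first
```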
Figure 4 illustrates an exemplary processing system 400 according to a further embodiment of the invention. According to the embodiment, the processing system 400 comprises means 401 for loading the cache with upload data, means 403 for overwriting upload data with download data when a download request is started, and means 405 for flushing the cache to a memory when the cache is full of download data. The processing system also comprises means 407 for queuing a reference to the requested upload data that is overwritten by download data for retrieval at the next cache flush, means 409 for uploading the upload data when no data is being downloaded, and means 411 for filling the cache, when the cache is empty, with upload data from the memory, wherein upload data for unfinished uploads is first loaded into the cache, followed by data referenced in an upload request queue and then generic upload data, said means being operatively connected to each other.

In another embodiment of the invention, according to Fig. 5, a computer-readable medium is illustrated schematically. A computer-readable medium 500 has embodied thereon a computer program 510 for processing by a computer 513, the computer program comprising code segments for dynamically allocating a cache. The computer program comprises a code segment 515 for loading the cache with upload data, a code segment 517 for overwriting upload data with download data when a download request is started, and a code segment 519 for flushing the cache to a memory when the cache is full of download data. The computer program also comprises a code segment 521 for queuing a reference to the requested upload data that is overwritten by download data for retrieval at the next cache flush, a code segment 523 for uploading the upload data when no data is being downloaded, and a code segment 525 for filling the cache, when the cache is empty, with upload data from the memory, wherein upload data for unfinished uploads is first loaded into the cache, followed by data referenced in an upload request queue and then generic upload data.
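Read as a program, code segments 515 to 525 amount to the skeleton below. Every name is hypothetical: the patent defines the behaviour of the segments, not their implementation.

```python
def load_cache_with_upload_data(cache, memory):              # segment 515
    cache.load(memory.read_upload_blocks(cache.free_slots()))

def overwrite_upload_with_download(cache, block, queue):     # segment 517
    overwritten = cache.overwrite_upload_block(block)
    if overwritten and overwritten.was_requested:            # segment 521:
        queue.push(overwritten.reference())                  # queue Ur references

def flush_cache_to_memory(cache, memory):                    # segment 519
    memory.write(cache.download_data())                      # one disk burst
    cache.clear()

def upload_when_idle(cache, network):                        # segment 523
    while cache.has_upload_data() and not cache.downloading:
        network.send(cache.next_upload_block())

def fill_empty_cache(cache, memory, queue, predict_generic): # segment 525
    cache.load(memory.read(cache.unfinished_upload_refs()))  # priority 1
    cache.load(memory.read(queue.drain()))                   # priority 2
    cache.load(predict_generic(cache.free_slots()))          # priority 3
```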
The invention can be implemented in any suitable form including hardware, software, firmware or any combination of these. However, the invention may be implemented as computer software running on one or more data processors and/or digital signal processors. The elements and components of an embodiment of the invention may be physically, functionally and logically implemented in any suitable way. Indeed, the functionality may be implemented in a single unit, in a plurality of units or as part of other functional units. As such, the invention may be implemented in a single unit, or may be physically and functionally distributed between different units and processors.
Although the present invention has been described above with reference to specific embodiments, it is not intended to be limited to the specific form set forth herein. Rather, the invention is limited only by the accompanying claims, and other embodiments than those described above are equally possible within the scope of these appended claims, e.g. different processing systems than those described above.
In the claims, the term "comprises/comprising" does not exclude the presence of other elements or steps. Furthermore, although individually listed, a plurality of means, elements or method steps may be implemented by e.g. a single unit or processor.
Additionally, although individual features may be included in different claims, these may possibly advantageously be combined, and the inclusion in different claims does not imply that a combination of features is not feasible and/or advantageous. In addition, singular references do not exclude a plurality. The terms "a", "an", "first", "second" etc. do not preclude a plurality. Reference signs in the claims are provided merely as a clarifying example and shall not be construed as limiting the scope of the claims in any way.

Claims

1. A method for dynamic allocation of a cache in a device, comprising: loading the cache with upload data; overwriting upload data with download data from an external network when a download request is started; and flushing the cache to a memory when the cache is full of download data.
2. The method according to claim 1, wherein said external network is a grid- distribution P2P network.
3. The method according to claim 1, wherein said upload data is comprised of generic upload data and upload requested data.
4. The method according to claim 3, comprising overwriting all of the generic upload data by download data in said cache before overwriting the upload requested data by the download data in said cache.
5. The method according to claim 3, wherein the generic upload data is data that is predicted to be needed.
6. The method according to claim 5, wherein the predicted generic upload data is selected by interpreting statistical information retrieved from an external network.
7. The method according to claim 5, wherein the predicted generic upload data is selected by analyzing the known requested upload data, an upload request queue and statistical information retrieved from an external network.
8. The method according to claim 4, further comprising: queuing a reference to requested upload data that is overwritten by download data for retrieval at next cache flush.
9. The method according to claim 1, further comprising: uploading the upload data when no data is being downloaded.
10. The method according to claim 1, further comprising: filling the cache when the cache is empty with upload data from the memory, wherein upload data for unfinished uploads is first loaded into the cache followed by data referenced in an upload request queue and then generic upload data.
11. A processor, comprising: a cache (106) having a plurality of data blocks; means (401) for loading the cache with upload data, means (403) for overwriting upload data with download data from an external network when a download request is started; means (405) for flushing the cache to a memory when the cache is full of download data, said means being operatively connected to each other.
12. The processor according to claim 11, wherein said processor is located in a consumer electronic device (100).
13. A system, comprising: a memory device (107); a cache (106) coupled to the memory device, the cache having a plurality of data blocks; a processor (104) configured for loading the cache with upload data, overwriting upload data with download data from an external network when a download request is started, and flushing the cache to a memory when the cache is full of download data.
14. The system according to claim 13, wherein said system is located in a consumer electronic device (100).
15. A computer-readable medium having embodied thereon a computer program for processing by a computer, the computer program comprising code segments for dynamically allocating a cache, comprising: a first code segment (515) for loading the cache with upload data, a second code segment (517) for overwriting upload data with download data from an external network when a download request is started; a third code segment (519) for flushing the cache to a memory when the cache is full of download data.
16. The computer readable medium according to claim 15, further comprising: a fourth code segment (521) for queuing a reference to the requested upload data that is overwritten by download data for retrieval at next cache flush; a fifth code segment (523) for uploading the upload data when no data is being downloaded; and a sixth code segment (525) for filling the cache when the cache is empty with upload data from the memory, wherein upload data for unfinished uploads is first loaded into the cache followed by data referenced in an upload request queue and then generic upload data.
PCT/IB2006/051559 2005-05-24 2006-05-17 Caching for low power consumption grid peer-to-peer devices WO2006126143A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP05300401.6 2005-05-24
EP05300401 2005-05-24

Publications (2)

Publication Number Publication Date
WO2006126143A2 true WO2006126143A2 (en) 2006-11-30
WO2006126143A3 WO2006126143A3 (en) 2007-04-12

Family

ID=37192639

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2006/051559 WO2006126143A2 (en) 2005-05-24 2006-05-17 Caching for low power consumption grid peer-to-peer devices

Country Status (1)

Country Link
WO (1) WO2006126143A2 (en)


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
GIUSEPPE BIANCHI ET AL: "The Role of Local Storage in Supporting Video Retrieval Services on ATM Networks" IEEE / ACM TRANSACTIONS ON NETWORKING, IEEE / ACM, NEW YORK, NY, US, vol. 5, no. 6, December 1997 (1997-12), XP011039102 ISSN: 1063-6692 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010133599A1 (en) * 2009-05-20 2010-11-25 Institut für Rundfunktechnik GmbH Peer-to-peer transmission system for data streams
US8990417B2 (en) 2009-05-20 2015-03-24 Institut Fur Rundfunktechnik Gmbh Peer-to-peer transmission system for data streams
CN102403008A (en) * 2010-09-17 2012-04-04 北京中星微电子有限公司 Method and system for carrying out breakpoint continuous connection on data stream in audio playing process and FIFO (First In First Out) controller

Also Published As

Publication number Publication date
WO2006126143A3 (en) 2007-04-12


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase (Ref country code: DE)
WWW Wipo information: withdrawn in national office (Country of ref document: DE)
NENP Non-entry into the national phase (Ref country code: RU)
WWW Wipo information: withdrawn in national office (Country of ref document: RU)
122 Ep: pct application non-entry in european phase (Ref document number: 06765684; Country of ref document: EP; Kind code of ref document: A2)