GB2469107A - Distribution and reception of plural offset data streams - Google Patents

Distribution and reception of plural offset data streams

Info

Publication number
GB2469107A
GB2469107A (application GB0905721A)
Authority
GB
United Kingdom
Prior art keywords
data
sequence
objects
data objects
data object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB0905721A
Other versions
GB0905721D0 (en)
GB2469107B (en)
Inventor
Chris Houghton
Jiri Fajtl
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SKINKERS Ltd
Original Assignee
SKINKERS Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SKINKERS Ltd filed Critical SKINKERS Ltd
Priority to GB0905721A priority Critical patent/GB2469107B/en
Publication of GB0905721D0 publication Critical patent/GB0905721D0/en
Publication of GB2469107A publication Critical patent/GB2469107A/en
Application granted granted Critical
Publication of GB2469107B publication Critical patent/GB2469107B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/16Analogue secrecy systems; Analogue subscription systems
    • H04N7/173Analogue secrecy systems; Analogue subscription systems with two-way working, e.g. subscriber sending a programme selection signal
    • H04N7/17309Transmission or handling of upstream communications
    • H04N7/17318Direct or substantially direct transmission and handling of requests
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04HBROADCAST COMMUNICATION
    • H04H20/00Arrangements for broadcast or for distribution combined with broadcast
    • H04H20/20Arrangements for broadcast or distribution of identical information via plural systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/231Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H04N21/23106Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion involving caching operations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/238Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
    • H04N21/2381Adapting the multiplex stream to a specific network, e.g. an Internet Protocol [IP] network
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/262Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists
    • H04N21/26275Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists for distributing content or additional data in a staggered manner, e.g. repeating movies on different channels in a time-staggered manner in a near video on demand system
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/63Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/643Communication protocols
    • H04N21/64322IP
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/16Analogue secrecy systems; Analogue subscription systems
    • H04N7/173Analogue secrecy systems; Analogue subscription systems with two-way working, e.g. subscriber sending a programme selection signal
    • H04N7/17309Transmission or handling of upstream communications
    • H04N7/17336Handling of requests in head-ends
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/16Analogue secrecy systems; Analogue subscription systems
    • H04N7/173Analogue secrecy systems; Analogue subscription systems with two-way working, e.g. subscriber sending a programme selection signal
    • H04N7/17345Control of the passage of the selected programme
    • H04N7/17363Control of the passage of the selected programme at or near the user terminal

Abstract

A method of distributing a stream of data, such as video data, from a stream provider to one or more requesting nodes, comprises receiving a stream of data containing time-sequential data content from a stream provider (100, Figure 1), and allocating data content from the stream of data to both a first sequence of time-sequential data objects 316 and a second sequence of time-sequential data objects 318, the first and second sequences being allocated substantially the same data content - e.g. alternating data portions making up a complete stream - from the data stream. Timings of data objects in the first and second sequences are such that time boundaries between data objects in the first and second sequences are offset; finally, the method makes data objects in the first and second sequences available for distribution to one or more requesting nodes. Intermediate cache nodes (116, Figure 1) may be provided. Also claimed is a method for requesting and decoding such respective shifted, staggered or offset data streams, with data requests containing unique identifiers identifying first and second data objects.

Description

Method and Apparatus for Distributing Data
Field of the Invention
The present invention relates to a method and apparatus for distributing data.
Background of the Invention
Methods for distributing a live stream of data from a server to a plurality of requesting nodes are known where the stream of data is transmitted separately to each requesting node as a sequence of packets of data via a communications network; however the bandwidth of the server's network interface and the bandwidth available at the server's communications network limit the number of requesting nodes that the server is able to distribute the stream of data to without dropping some packets of data or causing congestion on the communications network that could lead to lost packets of data.
Alternative methods for distributing a live stream of data from a server to a plurality of requesting nodes are known where the stream of data is transmitted to requesting nodes by first transmitting the stream of data to a number of cache nodes that each receive the stream of data and then retransmit it, either to further cache nodes or, eventually, to the plurality of requesting nodes. This allows the burden of transmitting the stream of data to be distributed between the cache nodes and hence allows the limit on the number of requesting nodes that can receive the stream of data to be controlled by the number of cache nodes used to retransmit the stream of data.
Methods for transmitting a live stream of data may typically use dedicated network protocols such as the Real-time Transport Protocol (RTP) and Real Time Streaming Protocol (RTSP) and corresponding server and cache transmission programs in order to provide the stream of data within the time constraints required to ensure that the stream of data is received 'live' by requesting nodes, and without network congestion or lack of bandwidth causing portions of the stream of data to be lost before reaching each requesting node.
However protocols and transmission programs such as these may be expensive to use and develop, and the servers and cache nodes that use these may also be expensive to use and maintain.
An alternative method for transmitting a stream of data simultaneously to a plurality of requesting nodes is to partition the stream of data into a sequence of data objects that can be stored on the server, and then allow these data objects to be requested from the server using a simple protocol for transferring data objects, such as the HyperText Transfer Protocol (HTTP). A number of cache nodes can intercept HTTP requests intended for the server and provide cached copies of the data objects to allow distribution of the stream of data to a large number of requesting nodes. If a cache node receives a request for a data object that it has not yet cached, the cache node sends a request for the data object to the server, receives a copy of the data object from the server, caches this copy of the data object at the cache node and also distributes the data object in response to the received request for the data object. If a cache node receives several requests within a short space of time for a data object that it has not yet cached, it may send to the server a separate request for the data object for each of the requests it receives until it has cached the data object. Hence although the above technique allows the servers and cache nodes involved in the distribution of the stream of data to use simple, inexpensive protocols to transfer the stream of data to the requesting nodes, the server and cache nodes can become flooded with requests for data objects if many of these requests arrive at the cache nodes within a short space of time, and these requests may be dealt with inefficiently as a result. This becomes particularly problematic if many requesting nodes each wish to consume the same portion of a stream of data, for example if the requesting nodes all wish to consume the most recently created portion of data belonging to a live stream of data.
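By way of illustration, the basic cache-node behaviour described above (serving a cached copy where one exists, and otherwise fetching the data object from the server before caching and serving it) may be sketched as follows; the class and function names are illustrative only and do not appear in the patent:

```python
class CacheNode:
    """Minimal sketch of the cache-node behaviour described above: serve a
    cached copy when available, otherwise fetch the data object from the
    origin server (or a parent cache), cache the result, and serve it."""

    def __init__(self, fetch_from_origin):
        # fetch_from_origin is assumed to be a callable mapping a data
        # object identifier to that object's bytes.
        self.fetch_from_origin = fetch_from_origin
        self.cache = {}

    def get(self, object_id: str) -> bytes:
        if object_id not in self.cache:
            # Not yet cached: forward the request towards the server.
            self.cache[object_id] = self.fetch_from_origin(object_id)
        return self.cache[object_id]
```

Note that this sketch caches after a single origin fetch; the flooding problem described in the text arises when many requests for an uncached object arrive before the first fetch completes, each triggering its own forwarded request.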
PCT patent application publication number WO 2005/109224 A2 describes a method for content streaming, where a content server generates from a content file a plurality of high and low quality streams that are divided into portions called streamlets and stored so that they can be accessed by a HyperText Transfer Protocol (HTTP) server. A client module can request streamlets belonging to any of the streams within the plurality of high and low quality streams from the HTTP server, and these streamlets can be cached by cache servers in order to allow a larger number of client modules to access the streamlets efficiently. In order to access the content file a client module requests streamlets belonging to lower or higher quality streams based upon continuous observation of time intervals between successive receive times of each requested streamlet. This allows the client module to continue to receive the content file at a lower quality by receiving a lower quality stream even if adverse network conditions affect the time intervals between successive receive times of each requested streamlet. The problem of many requests being made at the same time for the most recently created streamlets is avoided by each client module using a streamlet cache module to buffer received streamlets before they are consumed by the client module in order to access the content file; hence client modules do not try to consume the most recently created data all at once, but each consume portions of slightly older data buffered in their streamlet cache module at slightly different times. If all clients were to try and consume the latest streamlets belonging to a stream as soon as the streamlets are created, the server and cache nodes would become flooded with requests for the latest streamlets as described above.
It is an object of the present invention to provide a system for distributing a stream of data to a plurality of requesting nodes via a network of cache nodes using a simple protocol for transferring the stream of data, whilst avoiding the problems described above that are associated with such a technique.
Summary of the Invention
In accordance with a first aspect of the present invention, there is provided a method of distributing a stream of data from a stream provider to one or more requesting nodes, comprising: receiving a stream of data from a stream provider, the stream of data including time-sequential data content, allocating data content from the stream of data to both a first sequence of time-sequential data objects and a second sequence of time-sequential data objects, said first sequence of data objects and said second sequence of data objects being allocated substantially the same data content from said stream of data, and arranging timings of said data objects in the first and second sequences of data objects such that time boundaries between data objects in said first sequence of data objects are offset from time boundaries between data objects of said second sequence of data objects, making data objects in said first sequence of data objects and said second sequence of data objects available for distribution to one or more requesting nodes.
An advantage of the first aspect of the present invention is that a stream of data is made available to requesting nodes as both a first sequence of data objects and a second sequence of data objects where time boundaries between data objects in said first sequence of data objects are offset from time boundaries between data objects of said second sequence of data objects. Since the time boundaries are offset in the two sequences, a requesting node can receive data within a data object from one sequence even whilst a time boundary between data objects is occurring in the other sequence, thus enabling requests for portions of data from data objects to trigger not predominantly or regularly at the start of a data object. If the data objects are created in real time, or near real time, this avoids overload as a result of multiple requests being received within a short space of time.
By periodically alternating between the first sequence of data objects and the second sequence of data objects, a requesting node can nevertheless receive an un-interrupted stream of data.
In accordance with a second aspect of the present invention, there is provided a method of receiving portions of data containing time-sequential data content from a data object provider and decoding a sequence of portions of data, comprising: sending a request to a data object provider for a first portion of data containing at least some data within a first data object, said request comprising a unique identifier identifying the first data object, said first portion of data containing at least some recently created data in the time-sequential data content, decoding the first portion of data received in response to the request for the first portion of data, sending a request to a data object provider for a second portion of data containing at least some data within a second data object, said request comprising a unique identifier identifying the second data object, said second portion of data containing at least some recently created data in the time-sequential data content, decoding the second portion of data received in response to the request for the second portion of data, wherein said first data object is part of a first sequence of time-sequential data objects and said second data object is part of a second sequence of time-sequential data objects, said first sequence of data objects and said second sequence of data objects containing substantially the same data content, and wherein timings of said data objects in the first and second sequences of data objects are arranged such that time boundaries between data objects in said first sequence of data objects are offset from time boundaries between data objects of said second sequence of data objects.
Again, according to this aspect of the invention, time sequential data content can be received from a data object provider without the portions of data requested from the data object provider predominantly or regularly beginning at the start of a data object.
Further features and advantages of the invention will become apparent from the following description of preferred embodiments of the invention, given by way of example only, which is made with reference to the accompanying drawings.
Brief Description of the Drawings
Figure 1 schematically illustrates the principal components and communication links of a system for distributing a stream of data to one or more requesting nodes via a network of cache nodes according to different embodiments of the present invention.
Figure 2 shows schematically a requesting node 120 and the components of which it is comprised according to different embodiments of the present invention.
Figure 3 illustrates an example of a sequence of selected portions of data 320 that are selected from a first sequence of data objects 316 and a second sequence of data objects 318 by a requesting node.
Figure 4 schematically illustrates an embodiment of the invention where the network of cache nodes 114 distributes data objects received from the data object distributor 112 of a server 102 to a number of requesting nodes 120, 122 by storing copies of data objects in a data object cache 402 in each cache node.
Figure 5 illustrates the steps carried out at a server, cache node containing a data object cache, requesting node A and requesting node B in order to distribute a first data object from the server 102 to requesting nodes A and B via a cache node according to a particular embodiment of this invention.
Figure 6 schematically illustrates an embodiment of the invention where the network of cache nodes 114 distributes data objects received from the data object distributor 112 of a server 102 to a number of requesting nodes 120, 122 by forwarding data objects received from the data object distributor 112 of the server 102 to the requesting nodes 120, 122 without using a data object cache at each cache node.
Figure 7 illustrates the steps carried out at a server, cache node with no data object cache, requesting node A and requesting node B in order to distribute a first data object from the server 102 to requesting nodes A and B via a cache node according to a particular embodiment of this invention.
Detailed Description of the Invention
A detailed description of exemplary embodiments of the invention follows with reference to the figures provided.
Figure 1 schematically illustrates the principal components and communication links of a system for distributing a stream of data to a plurality of requesting nodes via a network of cache nodes according to different embodiments of the present invention. A server 102 consists of a microprocessor 128 that processes instructions stored in a random access memory (RAM) 126 that implement a stream receiver 104, first sequence producer 106, second sequence producer 108, data object store 110, and data object distributor 112. The server also consists of a network interface 130 such as a network card or a broadband modem that allows programs running on the microprocessor 128 to transmit and receive data via a communications network (not shown) such as the Internet, and a non-volatile storage device 132 such as a hard drive that can be accessed by programs running on the microprocessor 128.
A stream provider 100 such as a computer providing a live video stream sends data to the stream receiver 104 at the server 102 via a communications network. The first sequence producer 106 and the second sequence producer 108 process data received by the stream receiver 104 and each store processed data in the data object store 110, which may either store the data objects in the RAM 126 or may store them in a non-volatile memory device 132 such as a hard disk drive. A data object distributor 112 accesses data stored in the data object store and distributes it from the server 102 to a network of cache nodes 114 via a communications network. The data object distributor 112 may be for example a file server such as a HyperText Transfer Protocol (HTTP) server which transmits a data object stored as a file in the data object store 110 via a communications network in response to an HTTP 'GET' command requesting that file.
The network of cache nodes 114 consists of a number of cache nodes 116, 118 connected either to each other or to the server 102 via a communications network. Requesting nodes 120, 122 connect to a DNS server 124 to request that a Uniform Resource Identifier (URI) that points to the server 102 be resolved to a pointer to the server, so that a request for data can then be sent to the server using this pointer. The DNS server 124 can resolve the URI to a pointer to cache nodes 116, 118 within the network of cache nodes so that requests for data sent by the requesting nodes 120, 122 are sent to the cache nodes 116, 118. The cache nodes 116, 118 are then able to receive requests for data from the requesting nodes 120, 122, process these to provide a response to the requesting nodes, and, if necessary, send requests for data to the server and receive responses to these requests.
Figure 2 shows schematically a requesting node 120 and the components of which it is comprised according to different embodiments of the present invention. The requesting node consists of a microprocessor 206 that processes instructions stored in a random access memory (RAM) 208 that implement a data portion receiver 200, stream assembler 202 and stream consumer 204. The requesting node also consists of a video output component 210 that is able to render graphics produced by programs running on the microprocessor and output these to a video display 216 for viewing by the user controlling the requesting node. Programs running on the microprocessor 206 can process user input received by means 212 for accepting user input from a user input device (not shown) such as a mouse or computer keyboard. A network interface 214 such as a network card or a broadband modem is provided that allows programs running on the microprocessor 206 to transmit and receive data via a communications network 218 such as the Internet.
The system as described above is designed in such a way as to provide from the stream provider a live stream of data simultaneously to a number of requesting nodes without flooding the network of cache nodes 114 or the server 102 with requests for the newest data in the stream of data. In order to achieve this, the system proceeds as described below.
The stream provider 100 provides a stream of data to the server 102, for example by providing the stream of data in the form of a sequence of data packets over an Internet Protocol (IP) network. The stream of data may typically contain time-sequential data content, such as for example video data or audio data, and the stream of data may either be substantially live, for example it may contain data relating to a live video stream or television broadcast, or it may contain data that is not live but has been recorded prior to transmission of the stream of data.
The stream of data is received at the server 102 by the stream receiver 104 and is then processed by the first sequence producer 106 and the second sequence producer 108. The first sequence producer 106 allocates a sequence of quantities of data from the stream of data to a first sequence of data objects, for example the first sequence producer may allocate a first allocated quantity of data from the stream of data to a first data object in the first sequence of data objects, then the first sequence producer may allocate a second quantity of data from the stream of data to a second data object in the first sequence of data objects, and so on.
The second sequence producer 108 allocates substantially the same data from the stream of data to a second sequence of data objects in a similar fashion to the first sequence producer 106. However, the timings of the data objects in the first sequence of data objects and the second sequence of data objects are arranged such that time boundaries between data objects in the first sequence of data objects are offset from time boundaries between data objects of the second sequence of data objects.
As an example, the first sequence producer may allocate a first allocated quantity of data from the stream of data to a first data object in the first sequence of data objects. The second sequence producer may allocate a second allocated quantity of data from the stream of data to a first data object in the second sequence of data objects, where the data allocated to said data object would contain at least some of the data allocated to the first data object in the first sequence of data objects as well as a first further portion of data. The first sequence producer may then allocate a second allocated quantity of data from the stream of data to a second data object in the first sequence of data objects, where the data allocated to said second data object would contain at least some of the first further portion of data allocated to the first data object in the second sequence of data objects and a second further portion of data, and so on.
Preferably the above method is applied so that the two sequence producers 106, 108 produce two copies of the stream of data, the first copy being the first sequence of data objects and the second copy being the second sequence of data objects, where time boundaries between data objects in said first sequence of data objects are offset from time boundaries between data objects of said second sequence of data objects.
It is preferred that all data objects in both the first sequence of data objects and the second sequence of data objects are of a uniform size. When this is the case, no computation is required by the server 102 when creating each data object in order to ensure that time boundaries between data objects in the first sequence of data objects remain offset from time boundaries between data objects in the second sequence of data objects. In addition it is preferred that the boundaries between data objects in the first sequence of data objects and the boundaries between data objects in the second sequence of data objects are offset by half of the uniform size of data objects, such that the first half of the data in each data object in the first sequence of data objects is the same as the data in the second half of a data object in the second sequence of data objects, and the second half of the data in each data object in the first sequence of data objects is the same as the data in the first half of a data object in the second sequence of data objects.
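The preferred uniform-size, half-offset allocation may be sketched as follows, assuming a byte-oriented stream and an illustrative object size; the function name and constants are assumptions and do not appear in the patent:

```python
OBJECT_SIZE = 1000          # illustrative uniform data object size in bytes
OFFSET = OBJECT_SIZE // 2   # second sequence offset by half an object

def allocate_sequences(stream: bytes):
    """Allocate the same stream content to two sequences of data objects
    whose time boundaries are offset by half the uniform object size."""
    first = [stream[i:i + OBJECT_SIZE]
             for i in range(0, len(stream), OBJECT_SIZE)]
    # The second sequence begins half an object later, so its first object
    # contains the second half of the first sequence's first object plus
    # the first half of the first sequence's second object, and so on.
    second = [stream[i:i + OBJECT_SIZE]
              for i in range(OFFSET, len(stream), OBJECT_SIZE)]
    return first, second
```

With this allocation, the second half of each object in the first sequence equals the first half of an object in the second sequence, as described above.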
Each data object from both the first and second sequences of data objects is assigned a unique identifier and stored in the data object store 110. Preferably the data object's unique identifier is determined according to which sequence of data objects it belongs to and its location within that sequence, for example the first data object in the first sequence of data objects may be given the unique identifier "Al", the third data object in the second sequence of data objects "B3" and so on. A data object's unique identifier can be used to locate it and retrieve it from the data object store 110, for example if the data object store is a non-volatile memory device 132 such as a hard drive the data object can be assigned a unique file name based on its unique identifier that can be used to identify the data object when retrieving it from the hard drive.
In one embodiment of the invention where the stream of data provided by the stream provider 100 consists of data that is live, the unique identifier that is assigned to each data object in both the first and second sequences of data objects is determined according to which sequence of data objects each data object belongs to and the time when each data object was created. For example, each data object may be assigned a unique identifier according to which sequence of data objects each data object belongs to and the time when each data object was created rounded down to the nearest ten seconds.
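The identifier scheme in this example may be sketched as follows; the function name and label format are illustrative assumptions, not part of the patent:

```python
def object_identifier(sequence_label: str, created_at: float) -> str:
    """Hypothetical naming scheme for a data object: the label of the
    sequence it belongs to, followed by its creation time (seconds since
    some epoch) rounded down to the nearest ten seconds."""
    rounded = int(created_at) // 10 * 10
    return f"{sequence_label}{rounded}"
```

Because both the server and any requesting node can compute the same rounded timestamp, a requesting node can derive the identifier of a recent data object without asking the server for it.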
The data object distributor 112 accesses the data objects held in the data object store 110 and distributes them to a number of requesting nodes 120, 122 via the network of cache nodes 114. For example, if the data object distributor 112 is an HTTP server then it will provide a data object stored as a file in the data object store 110 in response to an HTTP 'GET' request for that file. If this HTTP server receives a request for a portion of a data object, for example in the form of a range header in an HTTP 'GET' request that requests a particular byte range within a file, the requested portion of that data object stored as a file in the data object store 110 is retrieved and distributed by the data object distributor. If the data object distributor 112 receives a request for all of or a portion of a data object that is still being allocated data by either the first sequence producer 106 or the second sequence producer 108, the data object distributor can provide the requested data object or portion of that data object as data is allocated to it.
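A request for a byte range of a data object, as described above, might be formed as follows using a standard HTTP Range header; the base URL and the '.dat' file-naming convention are illustrative assumptions:

```python
import urllib.request

def build_portion_request(base_url: str, object_id: str,
                          start: int, end: int) -> urllib.request.Request:
    """Build an HTTP GET requesting a byte range of the file that holds the
    named data object (file-naming convention assumed for illustration)."""
    req = urllib.request.Request(f"{base_url}/{object_id}.dat")
    # A standard HTTP Range header requests bytes start..end inclusive.
    req.add_header("Range", f"bytes={start}-{end}")
    return req
```

The resulting request could then be sent with urllib.request.urlopen, and would be intercepted by a cache node in the same way as a request for the whole file.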
The cache nodes 116, 118 are able to intercept requests for data made to the server 102 by requesting nodes 120, 122, for example HTTP requests such as 'GET' commands from requesting nodes to the server that request data objects stored as files on the server. For example, when a requesting node 120 makes an HTTP request for the file pointed to by the Uniform Resource Identifier (URI) "http://www.test.com/file1.dat", the requesting node 120 first requests a Domain Name System (DNS) server 124 to resolve "www.test.com" to an Internet Protocol (IP) address that points to the server 102, and the requesting node then sends the HTTP request to the computer at this IP address.
Instead of pointing the requesting node to the server 102, the DNS server 124 can point it to a cache node 116 in the network of cache nodes 114 that catches the HTTP request destined for the server 102 and distributes a local copy of the requested file rather than forwarding the request to the server, provided that a cached copy of the requested file is available at the cache node. If no cached copy of the requested file is available at the cache node, the cache node will forward the HTTP request to either another cache node or the server 102 (according to the cache node's arrangement within the network of cache nodes 114) and will cache the requested file when it is received in the corresponding response.
The network of cache nodes 114 is able to reduce the number of requests that reach the server 102 and allows data to be distributed efficiently from the server to a large number of requesting nodes. In one embodiment of the invention the network of cache nodes 114 may be a Content Distribution Network (CDN) provided and maintained by a third party, such as Akamai EdgeSuite™ or Limelight Networks LimelightDELIVER™, which provides the functionality outlined above. As data objects are assigned unique identifiers which are determined according to their location within a sequence of data objects it is possible for a requesting node 120, 122 to determine the unique identifier of a data object that it requires according to which section of the stream of data the node wishes to access, and form an appropriate request, for example an HTTP request containing the data object's file name which is based on the data object's unique identifier, to acquire that data object from the server 102 via the network of cache nodes 114.
As a result a requesting node 120 or 122 can use the first sequence of data objects and the second sequence of objects created by the server 102 and distributed by the network of cache nodes 114 to reconstruct a desired section of the stream of data from the stream provider 100. When a requesting node 120 wishes to obtain a desired section of the stream of data from the stream provider the data portion receiver 200 at the requesting node proceeds by requesting a first portion of data contained in a first data object from the server 102 and receiving said first portion of data, and then requesting a second portion of data contained in a second data object from the server 102 and receiving said second portion of data, and so on. Hence a sequence of portions of data is requested from a sequence of requested data objects. Preferably every sequential pair of portions of data in the requested sequence of portions of data consists of one portion of data requested from a data object belonging to the first sequence of data objects and one portion of data requested from a data object belonging to the second sequence of data objects. In this way a first sequence of portions of data are requested from a plurality of data objects belonging to the first sequence of data objects, and a second sequence of portions of data are requested from a plurality of data objects belonging to the second sequence of data objects, where the requests for said first sequence of portions of data are alternated with the requests for said second sequence of portions of data in order to receive an alternating sequence of portions of data, said alternating sequence of portions of data forming part of the stream of data provided by the stream provider 100.
Additionally it is preferred that the size of every portion of data requested by a requesting node is equal to half of the uniform size of all data objects.
Figure 3 illustrates an example of a sequence of requested portions of data 320 requested by a requesting node from a first sequence of data objects 316 and a second sequence of objects 318 distributed by the server 102. The first portion of data 308 in the sequence of requested portions of data 320 is selected from a first data object 300 in the first sequence of data objects 316; the second portion of data 310 in the sequence of requested portions of data 320 is selected from a second data object 304 in the second sequence of data objects 318, and so on. As is preferred, each data object 300, 302, 304, 306 in both the first and second sequences of data objects 316, 318 is of a uniform size 322, and the size 324 of each selected portion of data 308, 310, 312, 314 is equal to half of the uniform size of data objects 322.
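The alternating request pattern of Figure 3 can be sketched as below. The sketch assumes each half-size portion is taken from the start of the object it is requested from, and the identifier lists and tuple layout (identifier, first byte, last byte) are hypothetical conveniences; the actual offset within each object can vary, as described later.

```python
def alternating_portions(num_portions, object_size, first_ids, second_ids):
    """Build an alternating sequence of (object id, start byte, end byte)
    requests: each portion is half the uniform object size, with every
    sequential pair drawing one portion from each of the two sequences."""
    half = object_size // 2
    portions = []
    for i in range(num_portions):
        # Even-indexed portions come from the first sequence of data
        # objects, odd-indexed portions from the second sequence.
        ids = first_ids if i % 2 == 0 else second_ids
        portions.append((ids[i // 2], 0, half - 1))
    return portions

reqs = alternating_portions(4, 1000, ["s1-0", "s1-1"], ["s2-0", "s2-1"])
# → [('s1-0', 0, 499), ('s2-0', 0, 499), ('s1-1', 0, 499), ('s2-1', 0, 499)]
```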
The stream assembler 202 at a requesting node 120 assembles the sequence of requested portions of data requested and received by the data portion receiver 200 in order to reconstruct the stream of data from the stream provider 100. Preferably the sequence of requested portions of data assembled by the stream assembler 202 is an alternating sequence of portions of data forming part of the stream of data provided by the stream provider 100, as described above. A stream consumer 204 can then process the received stream of data, for example if the stream of data contains video data the stream consumer can decode this video data and render it for display on a video display 216 such as computer monitor via the video output device 210 which could be for example a computer graphics card.
Data objects from the first sequence of data objects and the second sequence of data objects produced by the first sequence producer 106 and second sequence producer 108, respectively, are stored in the data object store at the server 102 for a limited period of time after they have been created.
This allows requesting nodes to request to receive a section of the stream of data that was created at some time in the past.
However, although a user controlling a requesting node 120, 122 can direct the node to request and consume a section of the stream of data created in the past, if the stream of data is 'live' users are most likely to be interested in accessing the newest data as it is generated by the stream provider 100. In order for a user who directs a requesting node 120, 122 to access the live stream of data to start requesting the newest data objects in both the first and second sequences of data objects, the requesting node can first send a query to the server 102 which requests the unique identifiers of the newest data objects in both sequences of data objects, and the server can then provide these unique identifiers in a response to the requesting node.
In one embodiment of the invention the unique identifiers of the newest data objects in both the first and second sequences of data objects can be stored in a regularly updated file on the server 102, for example in an eXtensible Markup Language (XML) file held in the server's data object store 110. In this embodiment requesting nodes 120, 122 can request the file containing the unique identifiers of the newest data objects in both the first and second sequences of data objects by making an HTTP request to the server 102, which distributes the file in response to the request. This HTTP request may in one embodiment of the invention be transmitted to the server 102 via the network of cache nodes 114, which can then cache a recently created copy of the XML file, received in a response from the server, for distribution to requesting nodes 120, 122.
In an alternative embodiment of the invention where the unique identifier that is assigned to each data object in both the first and second sequences of data objects is determined according to which sequence of data objects each data object belongs to and the time when each data object was created, a requesting node 120, 122 can determine the unique identifiers of the newest data objects in both the first and second sequences of data objects according to the current time, without first contacting the server 102. A requesting node 120 could receive data relating to the current time from a time server via a communications network, for example by receiving the current time from the time server via the Network Time Protocol (NTP), or the requesting node 120 could comprise a clock which can be accessed by programs running on the microprocessor 206 in order to allow these to check the current time.
In an alternative embodiment of the invention where the unique identifier that is assigned to each data object in both the first and second sequences of data objects is determined according to which sequence of data objects each data object belongs to and the time when each data object was created, a requesting node 120, 122 can determine the unique identifiers of the data objects in both the first and second sequences of data objects at the start of a desired section of the stream of data that was created in the past according to the time when the start of the desired section of the stream of data was created.
Once a requesting node 120, 122 has consumed the newest data object from one of the sequences of data objects it can then calculate the unique identifier of the next data object it requires according to the unique identifier of the data object it has just consumed, and can proceed in this way in order to continue accessing the newest data in the stream of data.
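Under the time-based identifier embodiment described earlier, the calculation of the next identifier reduces to stepping a timestamp forward. This sketch assumes the hypothetical "sequence-timestamp" format from the earlier example and a ten-second object duration, neither of which is mandated by the description.

```python
OBJECT_DURATION = 10  # seconds covered by each data object (an assumption)

def next_identifier(current_id):
    """Given a time-based identifier such as 'seq1-1234567890', compute
    the identifier of the next data object in the same sequence by
    advancing the embedded creation timestamp by one object duration."""
    sequence, timestamp = current_id.rsplit("-", 1)
    return f"{sequence}-{int(timestamp) + OBJECT_DURATION}"

print(next_identifier("seq1-1234567890"))  # seq1-1234567900
```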
If a number of requesting nodes 120, 122 are each consuming a sequence of data objects relating to the same portion of a stream of data with the data objects being consumed at the same rate by each requesting node, for example if a number of requesting nodes are consuming the newest data in a live stream of data, then every requesting node may finish consuming a first data object and subsequently request a second data object as soon as it becomes available in order to continue receiving the stream of data. Requests from the requesting nodes 120, 122 for data objects that are not currently cached in the network of cache nodes 114, for example requests for new or recently created data objects, cause the network of cache nodes to make requests to the server 102 in order to retrieve copies of these data objects so that they can be distributed to the requesting nodes.
If within a short space of time a number of requesting nodes 120, 122 all finish consuming a first data object and then request a second data object that is not stored in the network of cache nodes 114, the cache nodes will not have sufficient time to begin caching the second data object so that it can be distributed efficiently to each requesting node. Instead, a cache node 116 will transmit a request to the server 102 or another cache node (according to the cache node's arrangement within the network of cache nodes) for each request for the second data object that it receives until it has started caching the second data object. As a result many near-simultaneous requests for a data object that is not yet cached in the network of cache nodes 114 can flood the network of cache nodes and server 102 with requests for that data object, and the same data object may then be transmitted more than once to a number of cache nodes as a result, reducing the efficiency of the server and network of cache nodes and placing them under unnecessary strain.
The system as described is able to overcome the above situation through the use of the two sequences of data objects that are distributed from the server 102. As a first sequence of data objects 316 and a second sequence of data objects 318 are distributed from the server 102, where both sequences contain substantially the same data, a requesting node 120, 122 can switch between downloading a first portion of a data object in the first sequence of data objects and a second portion of a data object in the second sequence of data objects at any time, and can still assemble a section of the stream of data from said first and second portions of data.
Additionally, as time boundaries between data objects in the first sequence of data objects 316 and between data objects in the second sequence of data objects 318 are offset, a requesting node 120, 122 can receive data within a data object from one sequence of data objects even whilst a time boundary between data objects is occurring in the other sequence of data objects, thus ensuring that requests for portions of data from data objects do not predominantly or regularly trigger at the start of a data object.
For example a requesting node 120, 122 currently receiving data from a first data object 300 belonging to the first sequence of data objects can switch to receiving data from a second more recently created data object 304 in the second sequence of data at any point over the period of time between when the second data object 304 is created and when the end of the first data object 300 is reached. Thus every requesting node 120, 122 is able to choose at what point within a certain time period to begin receiving the next data object in order to continue consuming the stream of data.
In order to minimise the number of requesting nodes 120, 122 making a number of requests for a data object that is not currently cached in the network of cache nodes 114 within a very short space of time, the point at which a switch is made from receiving data from a first data object in the first sequence of data objects to receiving data from a second data object in a second sequence of objects (and vice versa) can be controlled for each requesting node 120, 122 so that the network of cache nodes 114 is able to cache the second data object from the server 102 before receiving a large number of requests for it.
Preferably every requesting node 120, 122 that is consuming a live stream of data requests a portion of data that begins after a first quantity of data at the start of the next new data object from which it is requested. By controlling the size of the first quantity of data each requesting node can control when it begins receiving data from a new data object in order to allow the network of cache nodes 114 to successfully cache the new data object before distributing it to the number of requesting nodes.
For example, a first requesting node could be controlled so that almost as soon as a new data object in either the first sequence of data objects or the second sequence of data objects becomes available at the server 102, the first requesting node makes a request for a portion of data at the start of the new data object. The new data object is then cached by a first cache node in the network of cache nodes 114 and the first requesting node can begin receiving it. Other requesting nodes can be controlled so that they request a slightly later portion of the new data object after the first requesting node has made its request, and so will either receive that portion from the first cache node or cause other cache nodes to cache the new data object before distribution to a number of requesting nodes. Gradually as more and more requesting nodes request the new data object more and more cache nodes will have cached the new data object and can distribute it to these other requesting nodes without needing to first request it from another cache node or the server 102.
Preferably each requesting node 120, 122 controls the size of the first quantity of data by setting it to a value between zero and half of the uniform size of data objects 324 according to a probability distribution, where the probability distribution gives a low probability of selecting a low value for the size of the first quantity of data and a high probability of selecting a high value for the size of the first quantity of data. Thus when a number of requesting nodes 120, 122 are requesting the newest data in the stream of data a small number of them will make requests for data from the newest data objects in the first and second sequences of data as soon as they are created, forcing the network of cache nodes 114 to obtain copies of these data objects, but most will wait until the newest data objects are about to be succeeded by newer data objects, at which point the network of cache nodes should already have cached copies of the data objects and so can service requests for these portions of the data objects without having to forward requests to the server.
An example of such a probability distribution is shown in Figure 3a, where x is the value selected for the size of the first quantity of data, n is the uniform size of data objects 322, P is a probability less than 1, and the probability of selecting a given value for the size of the first quantity of data increases as said value increases exponentially up to a probability of P when said value is equal to half of the uniform size of data objects 324.
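A distribution of the kind shown in Figure 3a can be sampled by inverse-transform sampling of an exponentially increasing density over [0, n/2]. This is a sketch under stated assumptions: the density shape exp(k·x/(n/2)) and the rate parameter k are illustrative choices, not values taken from the description.

```python
import math
import random

def sample_start_offset(object_size, k=8.0, rng=random):
    """Sample the 'first quantity of data' from a density proportional to
    exp(k * x / half) over [0, half], where half is half the uniform data
    object size. Low offsets (requesting a brand-new object immediately)
    are rare; high offsets (waiting until near the end of the half-object
    window) are likely, matching the behaviour described above."""
    half = object_size / 2
    u = rng.random()
    # Inverse CDF of the truncated exponential density: F(x) =
    # (exp(k*x/half) - 1) / (exp(k) - 1), solved for x given u = F(x).
    return half * math.log(1 + u * (math.exp(k) - 1)) / k
```

With k = 8 most sampled offsets cluster near half the object size, so only a few nodes trigger the initial upstream fetch of each new data object.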
In another embodiment of the invention each requesting node 120, 122 receives at regular intervals from the server 102 a first size of data value that the requesting node uses to determine the size of the first quantity of data. The server 102 determines the first size of data value for each requesting node 120, 122 that wishes to consume the stream of data and ensures that a small number of requesting nodes have a small first size of data value whilst most requesting nodes have a high first size of data value.
In another embodiment of the invention where the sizes of each of the data objects in both the first and second sequences of data objects can be different, each requesting node requests a first portion of data from a first data object that begins after a first quantity of data at the start of the first data object, and then requests a second portion of data from a second data object that begins after a second quantity of data at the start of the second data object, and so on.
By controlling the size of the first and second quantities of data each requesting node that is following a stream of live data can control when it begins receiving data from each new data object in order to allow the cache nodes to successfully cache the new data object.
Another advantage of the present invention is its independence from the construction and implementation of the network of cache nodes 114 as explained below and in corresponding Figures 4 to 7.
Figure 4 schematically illustrates an embodiment of the invention where the network of cache nodes 114 distributes data objects received from the data object distributor 112 of a server 102 to a number of requesting nodes 120, 122 by storing copies of data objects in a data object cache 402 in each cache node.
In this embodiment of the invention a cache node 116 consists of a microprocessor 408 that processes instructions stored in a random access memory (RAM) 406 that implement a data object receiver 400, data object cache 402 and data object distributor 404. The cache node 116 also consists of a network interface 410 such as a network card or a broadband modem that allows programs running on the microprocessor 408 to transmit and receive data via a communications network (not shown) such as the Internet, and a non-volatile storage device 412 such as a hard drive that can be accessed by programs running on the microprocessor 408. Data objects are received by the data object receiver 400 at a cache node 116 from either other cache nodes or from the server 102, according to the cache node's arrangement within the network of cache nodes. When the data object receiver 400 begins receiving a data object that is not stored in the data object cache 402, the data object is stored in the data object cache as it is received, where the data object cache 402 may either store data objects in the RAM 406 or may store them in a non-volatile memory device 412 such as a hard disk drive.
A data object distributor 404 distributes data objects from a cache node 116 in response to requests for those data objects from either requesting nodes or other cache nodes 118. When distributing a data object the data object distributor 404 first checks if there is a copy of the data object stored in the data object cache 402, and if so said local copy is distributed to requesting nodes 120 or other cache nodes 118 as required. If there is no copy of the data object stored in the data object cache 402 the data object distributor checks if the data object receiver 400 is in the process of receiving the data object from the server 102 or another cache node, and if so distributes the data object to requesting nodes 120 or other cache nodes 118 as the data object is received. If the data object is not in the process of being received from the server 102 or from other cache nodes then the data object receiver 400 requests the data object to be distributed from the server 102 or other cache nodes, according to the cache node's 116 arrangement within the network of cache nodes 114, so that the data object may then be distributed as necessary. If several requests arrive at the data object distributor 404 within a short space of time for portions of data from a data object that has not yet begun to be received by the data object receiver 400, the data object receiver will make a new request for the data object from the server or another cache node (according to the cache node's arrangement within the network of cache nodes) for each of these requests.
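The distributor's decision logic at a caching node can be summarised in a simplified sketch. All names here are hypothetical, the cached and in-flight objects are modelled as plain dictionaries, and streaming behaviour is collapsed into returning complete byte strings; note in particular that each request arriving before reception of an object has begun triggers its own upstream request, as described above.

```python
def handle_request(cache, in_flight, object_id, request_upstream):
    """Sketch of the data object distributor at a cache node with a data
    object cache (the Figure 4 embodiment): serve a cached copy if one
    exists, forward data from a download already in progress if one
    exists, and otherwise issue a fresh request upstream (to the server
    or another cache node)."""
    if object_id in cache:
        return cache[object_id]       # distribute the local copy
    if object_id in in_flight:
        return in_flight[object_id]   # forwarded as it is being received
    # Not cached and not yet being received: every such request results
    # in its own upstream request in this embodiment.
    return request_upstream(object_id)
```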
If the data object distributor 404 receives a request for a portion of a data object, that portion is distributed from the data object cache 402 if a copy of the data object is present there. If the data object is in the process of being received the data object receiver 400 can forward the requested portion of the data object whilst or after that portion is received. If the data object is not in the process of being received by the data object receiver 400 the entire data object is requested from either the server 102 or other cache nodes by the data object receiver 400, according to the cache node's arrangement within the network of cache nodes.
Figure 5 illustrates the steps carried out at a server, cache node containing a data object cache, requesting node A and requesting node B in order to distribute a first data object from the server 102 to requesting nodes A and B via a cache node according to a particular embodiment of this invention.
Initially requesting node A requests a first portion of a first data object from the cache node (step 500). At the cache node the data object distributor 404 receives the request from requesting node A and checks the data object cache 402 to determine if the first data object is stored there. The data object cache 402 does not currently contain the first data object so the data object distributor 404 checks if the first data object is currently being received at the data object receiver 400. The data object receiver 400 is not currently receiving the first data object so it requests the entirety of the first data object from the server (step 502). The server then begins to transmit the first data object to the cache node (step 504). The cache node begins receiving the first data object, starts storing it in its data object cache 402, and starts transmitting the first portion of the first data object to requesting node A (step 506).
Requesting node B requests a second portion of the first data object from the cache node (step 508). At the cache node the data object distributor 404 receives the request from requesting node B and checks the data object cache 402 to determine if the first data object is stored there. The data object cache 402 has started to obtain a copy of the first data object as it is being received by the data object receiver 400, so the data object distributor 404 begins transmitting the second portion of the first data object to requesting node B (step 510). When the first portion of the first data object requested by requesting node A has been transmitted, the data object distributor 404 stops transmitting to requesting node A (step 512). When the second portion of the first data object requested by requesting node B has been transmitted, the data object distributor 404 stops transmitting to requesting node B (step 514). When all of the first data object has been transmitted to the cache node, the server stops transmitting to the cache node (step 516).
Figure 6 schematically illustrates an embodiment of the invention where the network of cache nodes 114 distributes data objects received from the data object distributor 112 of a server 102 to a number of requesting nodes 120, 122 by forwarding data objects received from the data object distributor 112 of the server 102 to the requesting nodes 120, 122 without using a data object cache at each cache node. In this embodiment of the invention a cache node 116 consists of a microprocessor 606 that processes instructions stored in a random access memory (RAM) 604 that implement a data object receiver 600 and a data object distributor 602. The cache node also consists of a network interface 608 such as a network card or a broadband modem that allows programs running on the microprocessor 606 to transmit and receive data via a communications network (not shown) such as the Internet. Data objects are received by a data object receiver 600 at a cache node 116 from either other cache nodes or from the server 102, according to the cache node's arrangement within the network of cache nodes. During receipt of a data object, a data object distributor 602 is able to forward the received data as it is received to requesting nodes 120 or other cache nodes 118. Once a cache node has completed receipt of a data object that data object cannot be forwarded to any requesting nodes 120 or other cache nodes 118 unless the data object starts to be received from the server 102 or another cache node again. If several requests arrive at the data object distributor 602 within a short space of time for portions of data from a data object that has not yet begun to be received by the data object receiver 600, the data object receiver will make a new request for the data object from the server or another cache node (according to the cache node's arrangement within the network of cache nodes) for each of these requests.
If the data object distributor 602 receives a request for a portion of a data object, the data object distributor will first check whether any portion of that data object is currently being received by the data object receiver 600. If the requested data object is not currently being received, the data object receiver 600 will request the required portion of the data object from either the server 102 or another cache node, according to the cache node's arrangement within the network of cache nodes. If a portion located before the required portion within the data object is currently being received, then the data object distributor 602 will extend the current request for the earlier portion of the requested data object so that it will also receive the required portion of the data object in the same transmission. For example, where the data object distributor 112 at the server 102 is an HTTP server that can receive an HTTP request with a first range header that requests a first portion of data from a data object stored as a file in the server's data object store 110, the server can receive a second range header during the transmission of the first portion of data that determines a second portion of data that should be included in the transmission. The data object distributor 602 at the cache node begins forwarding the required portion of the data object once the required portion of the data object begins to be received as part of the extended request for a portion of the data object.
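The extension of an in-flight request can be illustrated as a merge of byte ranges. This is a sketch only: the function name is hypothetical, and in practice the widened range would be conveyed to the upstream node (for example via a wider HTTP Range header, as in the range-header example above) rather than computed locally.

```python
def extend_range(current, required):
    """Merge a newly required byte range into the byte range already
    being received, so that both portions arrive in one transmission
    rather than as two separate requests. Ranges are (start, end)
    inclusive byte offsets within the data object."""
    return (min(current[0], required[0]), max(current[1], required[1]))

print(extend_range((0, 499), (500, 999)))  # (0, 999)
```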
Figure 7 illustrates the steps carried out at a server, cache node with no data object cache, requesting node A and requesting node B in order to distribute a first data object from the server 102 to requesting nodes A and B via a cache node according to a particular embodiment of this invention. Initially requesting node A requests a first portion of a first data object from the cache node (step 700). At the cache node the data object distributor 602 receives the request from requesting node A and checks the data object receiver 600 to determine if the first data object is currently being received. The data object receiver 600 is not currently receiving the first data object so it requests the first portion of the first data object from the server (step 702). The server then begins to transmit the first portion of the first data object to the cache node (step 704).
The cache node begins receiving the first portion of the first data object and starts transmitting the first portion of the first data object to requesting node A (step 706).
Requesting node B requests a second portion of the first data object from the cache node (step 708). At the cache node the data object distributor 602 receives the request from requesting node B and checks the data object receiver 600 to determine if the first data object is currently being received. The data object receiver 600 is currently receiving the first portion of the first data object. The first portion of the first data object contains at least some of the second portion, which has not yet begun to be received by the cache node, so the cache node extends its existing request for the first data object to include all of the second portion of the first data object (step 710). When the cache node begins receiving the second portion of the first data object it begins transmitting said portion to requesting node B (step 712). When the server has transmitted all of the first portion of the first data object to the cache node it continues transmitting the remainder of the second portion of the first data object to the cache node (step 714). When the cache node has transmitted all of the first portion of data to requesting node A, the data object distributor 602 stops transmitting data to requesting node A (step 716). When the server has transmitted all of the second portion of data to the cache node, the server stops transmitting to the cache node (step 718). When the cache node has transmitted all of the second portion of data to requesting node B, the data object distributor 602 stops transmitting data to requesting node B (step 720).
The above embodiments are to be understood as illustrative examples of the invention. Further embodiments of the invention are envisaged as follows.
Embodiments of the invention are envisaged where the server 102 produces from the stream of data a plurality of sequences of data objects, where each of the sequences of data objects within the plurality of sequences of data objects contain substantially the same data, and time boundaries between data objects within each of the sequences of data objects do not overlap with the time boundaries between the data objects within any of the other sequences of data objects. Requesting nodes 120, 122 can reconstruct the stream of data by receiving from the server 102 via the network of cache nodes 114 a sequence of portions of data from a sequence of data objects, each of the data objects within the sequence of data objects belonging to any of the sequences of data objects within the plurality of sequences of data objects.
Embodiments of the invention are envisaged where the rate at which the stream producer 100 produces the stream of data changes over time, but the data objects produced at the server 102 by the first sequence producer 106 and the second sequence producer 108 each contain the same quantity of data and are produced at a uniform rate. For example data objects from the first sequence of data objects and the second sequence of data objects produced by the first sequence producer 106 and the second sequence producer 108, respectively, can be produced such that each data object can contain more data than would ever be allocated to the stream of data by the stream producer 100 during the time between creation of that data object and the next data object in the sequence of data objects to which the data object belongs. When data is allocated to a data object by either the first sequence producer 106 or the second sequence producer 108, extra data that carries no information can be appended to the data allocated to the data object from the stream of data in order to ensure that the data object contains the correct quantity of data.
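The padding step in the fixed-size embodiment can be sketched as follows; the filler byte value is an assumption, since any data that carries no information would satisfy the description.

```python
def pad_object(data, object_size, filler=b"\x00"):
    """Pad the data content allocated to a data object up to the uniform
    object size with information-free filler bytes, so that each data
    object contains the same quantity of data regardless of the rate at
    which the stream producer generated content during its window."""
    if len(data) > object_size:
        raise ValueError("more data allocated than the object can hold")
    return data + filler * (object_size - len(data))
```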
Embodiments of the invention are envisaged where the stream of data produced by the stream producer 100 contains multiplexed data relating to a plurality of sub-streams of data. Requesting nodes can receive data from a selected number of sub-streams within the plurality of sub-streams of data multiplexed within the stream of data by requesting from the server 102 portions of data that contain the multiplexed data relating to the selected number of the sub-streams from data objects belonging to the first and second sequences of data objects, whilst not requesting portions of data that contain multiplexed data relating to other sub-streams that are not required.
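Selecting sub-streams from a multiplexed object can be sketched as below. The object layout and the range index are assumptions (the patent does not specify how the multiplexed sub-streams are located within an object); the index here stands in for whatever mechanism tells a node which portions cover which sub-streams.

```python
# Sketch of sub-stream selection (layout and index are assumptions, not
# from the patent): an object holds multiplexed data, and an index maps
# each sub-stream to the byte ranges it occupies, so a requesting node can
# ask the server only for the portions covering the sub-streams it wants.
object_bytes = b"VVVVAAASSVVAA"
index = {"video": [(0, 4), (9, 11)],
         "audio": [(4, 7), (11, 13)],
         "subs":  [(7, 9)]}

def request_portions(obj: bytes, index: dict, wanted: set) -> dict:
    # Stand-in for ranged requests to the server 102 / cache nodes; portions
    # for sub-streams that are not wanted are simply never requested.
    return {s: b"".join(obj[a:b] for a, b in index[s]) for s in wanted}

got = request_portions(object_bytes, index, {"video"})
assert got == {"video": b"VVVVVV"}
```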
It is to be understood that any feature described in relation to any one embodiment may be used alone, or in combination with other features described, and may also be used in combination with one or more features of any other of the embodiments, or any combination of any other of the embodiments.
Furthermore, equivalents and modifications not described above may also be employed without departing from the scope of the invention, which is defined in the accompanying claims.
It will be appreciated that each of the connections made via a communications network described above involving the stream producer 100, server 102, network of cache nodes 114 consisting of a number of cache nodes 116, 118, and requesting nodes 120, 122, could be made via separate communications networks, with each communications network being of a different type.

Claims (22)

  1. A method of distributing a stream of data from a stream provider to one or more requesting nodes, comprising: receiving a stream of data from a stream provider, the stream of data including time-sequential data content, allocating data content from the stream of data to both a first sequence of time-sequential data objects and a second sequence of time-sequential data objects, said first sequence of data objects and said second sequence of data objects being allocated substantially the same data content from said stream of data, and arranging timings of said data objects in the first and second sequences of data objects such that time boundaries between data objects in said first sequence of data objects are offset from time boundaries between data objects of said second sequence of data objects, making data objects in said first sequence of data objects and said second sequence of data objects available for distribution to one or more requesting nodes.
  2. A method according to claim 1, wherein said step of making data objects in said first sequence of data objects and said second sequence of data objects available for distribution to one or more requesting nodes is achieved by distributing said data objects to requesting nodes via one or more cache nodes.
  3. A method according to claim 1 or 2, wherein each data object in said first sequence of data objects is assigned a unique identifier and each data object in said second sequence of data objects is assigned a unique identifier, and said step of making data objects in said first sequence of data objects and said second sequence of data objects available for distribution to one or more requesting nodes is achieved by providing a particular data object in response to a request for that data object, said request identifying the required data object by way of its unique identifier.
  4. A method according to claim 3 where the unique identifier of the most recently created data object in the first sequence of data objects and the unique identifier of the most recently created data object in the second sequence of data objects are transmitted to a requesting node in response to a request for the unique identifiers of the most recently created data objects.
  5. A method according to claim 3, wherein said unique identifier assigned to each data object in said first sequence of data objects is based on the position of each data object within said first sequence of data objects, and said unique identifier assigned to each data object in said second sequence of data objects is based on the position of each data object within said second sequence of data objects.
  6. A method according to any preceding claim, wherein the data objects in said first sequence of data objects contain the same quantity of data as the data objects in said second sequence of data objects.
  7. A method according to any preceding claim, wherein the timings of said time boundaries between data objects in said first sequence of data objects are arranged such that said boundaries are located substantially half-way between adjacent time boundaries between data objects in said second sequence of data objects.
  8. A method according to any preceding claim, wherein said stream of data contains video data.
  9. A method according to any preceding claim, wherein said data contained in said stream of data is substantially live.
  10. A method of receiving portions of data containing time-sequential data content from a data object provider and decoding a sequence of portions of data, comprising: sending a request to a data object provider for a first portion of data containing at least some data within a first data object, said request comprising a unique identifier identifying the first data object, said first portion of data containing at least some recently created data in the time-sequential data content, decoding the first portion of data received in response to the request for the first portion of data, sending a request to a data object provider for a second portion of data containing at least some data within a second data object, said request comprising a unique identifier identifying the second data object, said second portion of data containing at least some recently created data in the time-sequential data content, decoding the second portion of data received in response to the request for the second portion of data, wherein said first data object is part of a first sequence of time-sequential data objects and said second data object is part of a second sequence of time-sequential data objects, said first sequence of data objects and said second sequence of data objects containing substantially the same data content, and wherein timings of said data objects in the first and second sequences of data objects are arranged such that time boundaries between data objects in said first sequence of data objects are offset from time boundaries between data objects of said second sequence of data objects.
  11. A method according to claim 10 where a first sequence of portions of data are requested from a plurality of data objects belonging to the first sequence of data objects, and a second sequence of portions of data are requested from a plurality of data objects belonging to the second sequence of data objects, and where the requests for said first sequence of portions of data are periodically alternated with the requests for said second sequence of portions of data in order to receive an alternating sequence of portions of data that forms part of a stream of data.
  12. A method according to claim 10 or 11 where the first data object is the most recently created data object in the first sequence of time-sequential data objects, and where the second data object is the most recently created data object in the second sequence of time-sequential data objects.
  13. A method according to claim 12 where the unique identifiers of the first and second data objects are received from a server in response to a request for the unique identifiers of the newest data objects, the unique identifiers of subsequent data objects in the first sequence of time-sequential data objects are determined according to their location in said first sequence in relation to the location of the first data object within the first sequence and according to the unique identifier of the first data object, and the unique identifiers of subsequent data objects in the second sequence of time-sequential data objects are determined according to their location in said second sequence in relation to the location of the second data object within the second sequence and according to the unique identifier of the second data object.
  14. A method according to claim 12 where the unique identifiers of the first data object and subsequent data objects in the first sequence of time-sequential data objects are determined according to the current time, and where the unique identifiers of the second data object and subsequent data objects in the second sequence of time-sequential data objects are determined according to the current time.
  15. A method according to any of claims 10 to 14, wherein said data object provider is a cache node.
  16. A method according to any of claims 10 to 15, wherein said at least some data contained in a first data object begins after a first quantity of data in said first data object, and said at least some data contained in a second data object begins after a second quantity of data in said second data object.
  17. A method according to claim 16, wherein the size of the first quantity of data is selected according to a probability distribution, and the size of the second quantity of data is selected according to a probability distribution.
  18. A method according to claim 16, wherein the size of the first quantity of data is selected by receiving a first size of data value from a server and setting the size of the first quantity of data according to said first size of data value, and the size of the second quantity of data is selected by receiving a second size of data value from a server and setting the size of the second quantity of data according to said second size of data value.
  19. A method according to claim 17 or 18, wherein the first quantity of data contains the same amount of data as contained in the second quantity of data.
  20. A method according to claim 19, wherein the first quantity of data contains an amount of data less than or equal to half of the amount of data in the first data object, and the second quantity of data contains an amount of data less than or equal to half of the amount of data in the second data object.
  21. A method according to any of claims 10 to 20, wherein said first portion of recently created data and said second portion of recently created data contain the same quantity of data.
  22. A method according to any of claims 10 to 21, wherein said step of decoding data received in response to the request for at least some data from the first data object and said step of decoding data received in response to the request for at least some data from the second data object are performed in order to reconstruct a portion of the time-sequential data content.
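The receiving method of claims 10 to 13 can be sketched as a short request schedule. The function and variable names are assumptions: the node obtains the newest identifier in each sequence (claim 4), then alternates requests between the sequences, deriving each subsequent identifier from its position relative to the newest object (claims 5 and 13).

```python
# Sketch of the receiving side of claims 10-13 (names are assumptions):
# alternate requests between the two sequences, deriving each subsequent
# unique identifier from the object's position in its sequence.
def alternating_requests(newest_a: int, newest_b: int, count: int):
    """Yield (sequence, identifier) pairs for an alternating request schedule."""
    for i in range(count):
        if i % 2 == 0:
            yield ("first", newest_a + i // 2)   # next object in first sequence
        else:
            yield ("second", newest_b + i // 2)  # next object in second sequence

schedule = list(alternating_requests(100, 200, 4))
assert schedule == [("first", 100), ("second", 200),
                    ("first", 101), ("second", 201)]
```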
GB0905721A 2009-04-02 2009-04-02 Method and apparatus for distributing data Expired - Fee Related GB2469107B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB0905721A GB2469107B (en) 2009-04-02 2009-04-02 Method and apparatus for distributing data

Publications (3)

Publication Number Publication Date
GB0905721D0 GB0905721D0 (en) 2009-05-20
GB2469107A true GB2469107A (en) 2010-10-06
GB2469107B GB2469107B (en) 2015-01-21

Family

ID=40749982

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0905721A Expired - Fee Related GB2469107B (en) 2009-04-02 2009-04-02 Method and apparatus for distributing data

Country Status (1)

Country Link
GB (1) GB2469107B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6330252B1 (en) * 1997-06-24 2001-12-11 Hitachi, Ltd. Data broadcasting system for performing highspeed data transmission, data broadcasting apparatus and data receiving apparatus for performing high speed data transmission
WO2002049360A1 (en) * 2000-12-13 2002-06-20 THE CHINESE UNIVERSITY OF HONG KONG A body corporate of Hong Kong SAR Method and system for delivering media selections through a network
EP1433323A1 (en) * 2001-07-31 2004-06-30 Dinastech IPR Limited Method for delivering data over a network
EP1781034A1 (en) * 2004-07-27 2007-05-02 Sharp Kabushiki Kaisha Pseudo video-on-demand system, pseudo video-on-demand system control method, and program and recording medium used for the same
EP2100461A2 (en) * 2006-12-20 2009-09-16 Thomson Research Funding Corporation Video data loss recovery using low bit rate stream in an iptv system



Similar Documents

Publication Publication Date Title
JP6944485B2 (en) Requests for multiple chunks to a network node based on a single request message
US6708213B1 (en) Method for streaming multimedia information over public networks
US7975282B2 (en) Distributed cache algorithms and system for time-shifted, and live, peer-to-peer video streaming
CN101237429B (en) Stream media living broadcasting system, method and device based on content distribution network
CN101478556B (en) Method and apparatus for downloading peer-to-peer transmitted data slice
US20080037527A1 (en) Peer-to-Peer Interactive Media-on-Demand
US20140095593A1 (en) Method and apparatus for transmitting data file to client
US20090007196A1 (en) Method and apparatus for sharing media files among network nodes with respect to available bandwidths
WO2008011388A2 (en) Methods and apparatus for transferring data
US20130144994A1 (en) Content Delivery Network and Method for Content Delivery
CN107124668B (en) Streaming transmission device and method, streaming transmission service system, and recording medium
CN103108008A (en) Method of downloading files and file downloading system
US7991905B1 (en) Adaptively selecting timeouts for streaming media
CN106059936B (en) The method and device of cloud system Multicast File
WO2017063574A1 (en) Streaming media adaptive transmission method and device
EP2670109B1 (en) Method, system and devices for multimedia delivering in content delivery networks
CN112104885A (en) System and method for accelerating M3U8 initial playing speed in live broadcasting
JP2008522490A5 (en)
CN111193686B (en) Media stream delivery method and server
CN111193684B (en) Real-time delivery method and server of media stream
US9386056B1 (en) System, method and computer readable medium for providing media stream fragments
GB2469107A (en) Distribution and reception of plural offset data streams
CN112788135B (en) Resource scheduling method, equipment and storage medium
CN115297095A (en) Return source processing method and device, computing equipment and storage medium
EP3386160B1 (en) Providing multicast adaptive bitrate content

Legal Events

Date Code Title Description
732E Amendments to the register in respect of changes of name or changes affecting rights (sect. 32/1977)

Free format text: REGISTERED BETWEEN 20110915 AND 20110921

PCNP Patent ceased through non-payment of renewal fee

Effective date: 20170402