AU2002322988B2 - System for delivering data over a network - Google Patents

System for delivering data over a network

Info

Publication number
AU2002322988B2
Authority
AU
Australia
Prior art keywords
data
latency
client
data streams
streams
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU2002322988A
Other versions
AU2002322988A1 (en)
AU2002322988C1 (en)
Inventor
Gin-Man Chan
Raymond Kwong-Wing Chan
Kwok-Wai Cheung
Wing-Kai Lam
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
DINASTech IPR Ltd
Original Assignee
DINASTech IPR Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US09/917,639 external-priority patent/US7574728B2/en
Priority claimed from US09/954,041 external-priority patent/US7200669B2/en
Application filed by DINASTech IPR Ltd filed Critical DINASTech IPR Ltd
Publication of AU2002322988A1 publication Critical patent/AU2002322988A1/en
Publication of AU2002322988B2 publication Critical patent/AU2002322988B2/en
Application granted granted Critical
Publication of AU2002322988C1 publication Critical patent/AU2002322988C1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/4722 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting additional data associated with the content
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N21/61 Network physical structure; Signal processing
    • H04N21/6156 Network physical structure; Signal processing specially adapted to the upstream path of the transmission network
    • H04N21/6181 Network physical structure; Signal processing specially adapted to the upstream path of the transmission network involving transmission via a mobile phone network
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N21/65 Transmission of management data between client and server
    • H04N21/658 Transmission by the client directed to the server
    • H04N21/6581 Reference data, e.g. a movie identifier for ordering a movie or a product identifier in a home shopping application
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81 Monomedia components thereof
    • H04N21/812 Monomedia components thereof involving advertisement data
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/16 Analogue secrecy systems; Analogue subscription systems
    • H04N7/173 Analogue secrecy systems; Analogue subscription systems with two-way working, e.g. subscriber sending a programme selection signal
    • H04N7/17309 Transmission or handling of upstream communications
    • H04N7/17336 Handling of requests in head-ends
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/16 Analogue secrecy systems; Analogue subscription systems
    • H04N7/173 Analogue secrecy systems; Analogue subscription systems with two-way working, e.g. subscriber sending a programme selection signal
    • H04N2007/1739 Analogue secrecy systems; Analogue subscription systems with two-way working, e.g. subscriber sending a programme selection signal the upstream communication being transmitted via a separate link, e.g. telephone line

Description

System for Delivering Data Over a Network

Field of the Invention

This invention relates to methods and systems for delivering data over a network, particularly those for delivering a large amount of data with repetitive content to a large number of clients, like Video-on-Demand (VOD) systems.
Background of the Invention

Any discussion of the prior art throughout the specification should in no way be considered as an admission that such prior art is widely known or forms part of common general knowledge in the field.
Current VOD systems face a number of challenges. One of them is how to provide the clients, which may number in the millions, with sufficient interactivity such as fast-forward/backward and/or forward/backward jump. At the same time, the provision of such functions should not impose a severe network load, as network resources, namely bandwidth, may be limited. Furthermore, every client generally prefers the selected movie to start as soon as possible.
The following sections describe some of the currently used VOD systems and their possible disadvantages.

1. Near-VOD (NVOD) with regular stream-interval

An NVOD system consists of staggered multicast streams with a regular stream interval T (Figure 1). The streams are multiplexed onto the same or different physical media for distribution to the users via some multiplexing mechanism (such as time-division multiplexing, frequency-division multiplexing, code-division multiplexing, or wavelength-division multiplexing). The distribution mechanisms include point-to-point, point-to-multipoint and other methods. Each stream is divided into regular segments of interval T, and the segments are labelled 1 to N respectively. The content that is to be distributed to the users is carried on the N segments and the content is replicated on all these streams. The content is also repeated on each stream in time.
By using such a staggered streaming arrangement with regular stream interval T, the users are guaranteed to receive the content at any time with a start-up latency of less than T. However, there is no provision for user interactivity in such a system. If a user interrupts the content viewing, say by pausing the display, the user cannot resume the viewing at the same play point where he paused and is forced to skip some content to keep up with the multicast stream that is continuously playing.
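As a rough illustration of this staggered NVOD arrangement, the short Python sketch below builds the repeating segment schedule and confirms that a viewer arriving at an arbitrary time waits at most one stream interval T for Segment 1. The concrete values (R = 120 minutes, T = 30 seconds) and the helper names are assumptions chosen only for the example, not values taken from the patent.

```python
# Illustrative sketch (not from the patent): an NVOD arrangement of staggered streams.
# Assumed example values: R = 120 minutes of content, T = 30 s stream/segment interval.
R = 120 * 60                 # content length in seconds
T = 30                       # segment duration = stream interval in seconds
K = R // T                   # number of segments = number of staggered streams (R/T)

def segment_on_stream(i, t):
    """Segment (1..K) that stream i is sending at time t; stream i is offset by i*T
    and cycles through segments 1..K forever."""
    return ((t - i * T) // T) % K + 1

def startup_wait(arrival):
    """Seconds until the next stream begins Segment 1 (some stream does so every T s)."""
    return (T - arrival % T) % T

print("At t = 95 s, stream 2 is sending segment", segment_on_stream(2, 95))
worst = max(startup_wait(t) for t in range(0, K * T))
print(f"{K} streams; worst-case start-up wait {worst} s, i.e. bounded by T = {T} s")
```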
2. Quasi-VOD (QVOD) with irregular stream-interval

A QVOD system consists of staggered multicast streams with irregular stream intervals (Figure 2). The streams are multiplexed onto the same or different physical media for distribution to the users via some multiplexing mechanism (such as time-division multiplexing, frequency-division multiplexing, code-division multiplexing, or wavelength-division multiplexing). The distribution mechanisms include point-to-point, point-to-multipoint and other methods. Unlike the NVOD system where the streams constantly exist, the streams in a QVOD system are created on demand from the users' requests for the content. The users' requests within a certain time interval Ti are batched together and served together by Stream i. The stream intervals T1, T2, ..., Ti, ... are irregular. The streams (Stream 1 to Stream i) are all provided on demand and will be removed as soon as the content distribution has been completed. The streams are constantly created as users' requests come in. By using such a staggered streaming arrangement with irregular stream interval Ti, the particular group of users starting within interval Ti is guaranteed to receive the contents within Ti (the start-up latency).
Again, there is no provision for user interactivity in such a system. If a user interrupts the content viewing, say by pausing the display, the user cannot resume the viewing at the same play point where he paused and is forced to skip some content to keep up with the multicast stream that is continuously playing.
3. Distributed Interactive Network Architecture (DINA)

The DINA system refers to the method and system described in the applicant's PCT applications PCT/IB00/001857 and 001858. In the DINA system, interactive functions including fast-forward/backward, forward/backward jump, slow motion, and so on can be provided by a plurality of multicast video data streams in conjunction with a plurality of distributed interactive servers. Although interactive functions may be provided to the client in such a DINA system, the network load may increase if the start-up time for each user's request is to be reduced. This is determined by the stream interval of the multicast data streams. Generally, the number of data streams, and therefore the network load, increases as the stream interval decreases.
In the NVOD and QVOD systems, a user wanting to view the content will simply tap into one of the many staggered streams and view the content simultaneously with all others sharing the stream. While such schemes are simple and efficient, they suffer from two difficulties: a large start-up latency and user inflexibility.
For the first difficulty, a user may have to wait as long as one stream interval T before the request is served, and the waiting time may be as long as many minutes or even hours, depending on the stream interval. Although the stream interval can be made very small, say even down to a few seconds, this also means that the system has to provide a large number of streams for serving the same amount of content. The number of streams required is simply R/T, where R is the length of the content and T is the stream interval. Thus, a small start-up latency may incur a much higher transmission bandwidth and cost. The DINA system may also face such a difficulty.
For the second difficulty, the users viewing a multicast stream cannot freely interrupt the stream because there are other viewers. Therefore, NVOD and QVOD systems cannot allow VCR-like interactivity such as pause, resume, rewind, slow motion, fast forward, and so on. These systems also hinder the introduction of new forms of interactive media. In recent years, one popular approach to offer some form of VCR-like interactivity over NVOD and QVOD systems has been to add a storage unit to the set-top box (STB) so as to cache all the available content being broadcast. Such systems suffer from a higher system cost and operational problems like storage unit failure and management.
It can be seen that the prior art may fail to provide a solution to the existing problems in VOD systems. Specifically, current VOD systems may not be able to provide the clients/users with the desired interactive functions with a short start-up time while at the same time minimising the network load. Therefore, it is an object of this invention to resolve at least some of the problems set forth in the prior art. As a minimum, it is an object of this invention to provide the public with a useful choice.
Summary of the Invention

A first aspect of the invention provides a system for transmitting data over a network to at least one client having a latency time to initiate transmission of said data to the client, including: at least one anti-latency signal generator for generating a plurality of anti-latency data streams containing at least a leading portion of data for receipt by a client; and at least one interactive signal generator for generating a plurality of interactive data streams containing at least a remaining portion of said data for the client to merge into after receiving at least a portion of an anti-latency data stream.
A second aspect of the invention provides a system for transmitting data over a network to at least one client including a signal generator for fragmenting said data into K data segments each requiring a time T to transmit over the network, wherein each of the K data segments contains a head portion and a tail portion, and the head portion contains a portion of data of the tail portion of the immediately preceding segment to facilitate merging of the K data segments when received by the client.
A further aspect of the invention provides a system for transmitting data over a network to at least one client having a latency time to initiate transmission of said data to the client, including: at least one anti-latency signal generator for generating at least one anti-latency data stream containing at least a leading portion of data for receipt by the client; a buffer in the client for pre-fetching the leading portion in the client as pre-fetched data; and at least one interactive signal generator for generating at least one interactive data stream containing at least a remaining portion of said data for the client to merge into the leading portion.
A further aspect of the invention provides a system for transmitting data over a network to at least one client including at least one anti-latency signal generator for generating a plurality of anti-latency data streams, the anti-latency data streams including: a leading data stream containing at least one leading segment of a leading portion of said data being repeated continuously within the leading data stream; and a plurality of finishing data streams, each of the finishing data streams containing at least the rest of the leading portion of said data and repeated continuously within said finishing data stream, and wherein each successive finishing data stream is staggered by an anti-latency time interval.
A further aspect of the invention provides a system for transmitting data over a network to at least one client including at least one anti-latency signal generator for generating a plurality of anti-latency data streams, wherein the anti-latency data streams include: M anti-latency data streams from 1 to M, wherein an m-th anti-latency data stream has Fm segments, and Fm is an m-th Fibonacci number; and wherein said Fm segments are repeated continuously within the m-th anti-latency data stream.
A further aspect of the invention provides a system for transmitting data over a network to at least one client, said data being fragmented into K segments each requiring a time T to transmit over the network, including at least one anti-latency signal generator for generating a plurality of anti-latency data streams, wherein the anti-latency data streams include: M anti-latency data streams containing 1 to K anti-latency data segments, wherein the anti-latency data segments are distributed in the M anti-latency data streams such that a k-th leading segment is repeated by an anti-latency time interval kT within the anti-latency data streams.
A further aspect of the invention provides a receiver for receiving data being transmitted over a network to at least one client according to the first aspect of the invention, wherein: said data is fragmented into K segments each requiring a time T to transmit over the network; the anti-latency data streams include M anti-latency data streams; and the interactive data streams include N interactive data streams; the receiver including: a processor for raising a request for said data; and at least one connector for connecting the client to the M anti-latency data streams and receiving data in the M anti-latency data streams.
A further aspect of the invention provides a receiver for receiving data being transmitted over a network to at least one client, wherein said data includes a leading portion and a remaining portion, and the remaining portion is transmitted by at least one interactive data stream, including: a buffer for pre-fetching the leading portion in the client as pre-fetched data; and a processor for merging the pre-fetched data into the remaining portion.
A further aspect of the invention provides a system for transmitting data over a network to at least one client having a latency time to initiate transmission of said data to the client, including: at least one anti-latency signal generator for generating a plurality of anti-latency data streams containing at least a leading portion of said data for receipt by the client; and at least one interactive signal generator for generating a plurality of interactive data streams containing at least a remaining portion of said data for the client to merge into after receiving at least a portion of an anti-latency data stream, wherein the leading portion of said data can be generated at regular anti-latency stream intervals, and is generated at the next earliest anti-latency stream interval after at least one client raises a request for said data.
A further aspect of the invention provides an anti-latency signal generator for generating a plurality of anti-latency data streams to transmit data over a network to at least one client, wherein the anti-latency data streams include: a leading data stream that contains at least one leading segment of the leading portion of said data, can be generated at regular anti-latency time intervals, and is generated at the next earliest anti-latency stream interval after the client raises a request for said data; and a plurality of finishing data streams, each of which contains the rest of the leading portion of said data, corresponds to one of the leading segments, and is generated when the corresponding leading segment is generated.
A further aspect of the invention provides an anti-latency signal generator for generating M anti-latency data streams to transmit data over a network to at least one client, wherein an m-th anti-latency data stream has Fm segments, and Fm is an m-th Fibonacci number; the Fm segments can be generated at regular anti-latency stream intervals; the first Fm segment is generated at the next earliest anti-latency stream interval when the client raises a request for said data; and subsequent segments are generated before all data in the preceding Fm segment is received by the client.
A further aspect of the invention provides a method for transmitting data over a network to at least one client having a latency time to initiate transmission of said data to the client. The method of this invention includes the steps of: generating at least one anti-latency data stream containing at least a leading portion of data for receipt by a client; and generating at least one interactive data stream containing at least a remaining portion of said data for the client to merge into after receiving at least a portion of an anti-latency data stream.
The anti-latency data streams and the interactive data streams may be generated by at least one anti-latency signal generator and at least one interactive signal generator, respectively.
A further aspect of the invention provides a method for transmitting data over a network to at least one client including the step of fragmenting said data into K data segments each requiring a time T to transmit over the network, wherein each of the K data segments contains a head portion and a tail portion, and the head portion contains a portion of data of the tail portion of the immediately preceding segment to facilitate merging of the K data segments when received by the client.
The K data segments may be generated by a signal generator.
A further aspect of the invention provides a method for transmitting data over a network to at least one client having a latency time to initiate transmission of said data to the client, including the steps of: generating at least one anti-latency data stream containing at least a leading portion of data for receipt by the client; pre-fetching the leading portion in the client as pre-fetched data; and generating at least one interactive data stream containing at least a remaining portion of said data for the client to merge into the leading portion.
A further aspect of the invention provides a method for transmitting data over a network to at least one client including the steps of generating a plurality of anti-latency data streams, in which the anti-latency data streams include: a leading data stream containing at least one leading segment of a leading portion of said data being repeated continuously within the leading data stream; and a plurality of finishing data streams, each of the finishing data streams containing at least the rest of the leading portion of said data and repeated continuously within said finishing data stream, and wherein each successive finishing data stream is staggered by an anti-latency time interval.
A further aspect of the invention provides a method for transmitting data over a network to at least one client. The method includes the steps of generating M anti-latency data streams from 1 to M, wherein an m-th anti-latency data stream has Fm segments, and Fm is an m-th Fibonacci number; and wherein said Fm segments are repeated continuously within the m-th anti-latency data stream.
A further aspect of the invention provides a method for transmitting data over a network to at least one client, said data being fragmented into K segments each requiring a time T to transmit over the network. The method includes the steps of generating M anti-latency data streams containing 1 to K anti-latency data segments, wherein the anti-latency data segments are distributed in the M anti-latency data streams such that a k-th leading segment is repeated by an anti-latency time interval kT within the anti-latency data streams.
A further aspect of the invention provides a method for receiving data being transmitted over a network to at least one client. The data to be transmitted is fragmented into K segments each requiring a time T to transmit over the network. The data is divided into two batches of data streams: the anti-latency data streams include M anti-latency data streams, and the interactive data streams include N interactive data streams. The method for receiving the data includes the steps of: raising a request for said data, which may be raised by a processor of the client; and connecting the client to the M anti-latency data streams and receiving data in the M anti-latency data streams. The client or the receiver may connect to the anti-latency data streams by a connector.
A further aspect of the invention provides a method for receiving data being transmitted over a network to at least one client, wherein said data includes a leading portion and a remaining portion, and the remaining portion is transmitted by at least one interactive data stream, including the steps of: pre-fetching the leading portion in the client as pre-fetched data, which is contained in the buffer of the client; and merging the pre-fetched data into the remaining portion by a processor.
Alternatively, instead of being generated continuously, the anti-latency data streams can be generated upon request from the client.
Further embodiments and options of the above methods and systems will be described in the following sections, and may then be apparent to one skilled in the art after reading the description.
Brief description of the drawings

Preferred embodiments of the present invention will now be explained by way of example and with reference to the accompanying drawings, in which:

Figure 1 shows the data stream structure of an NVOD system.
Figure 2 shows the data stream structure of a QVOD system.
Figure 3 shows the overall system architecture of the data transmission system of this invention.
Figure 4 shows the data streams arrangement of Configuration 1 of the data transmission system of this invention.
Figure 5 shows the data streams arrangement of Configuration 2 of the data transmission system of this invention.
Figure 6 shows the data streams arrangement of Configuration 3 of the data transmission system of this invention. Note the difference in the arrangement of the Group II data streams compared with Figures 4 and 5. Figure 7 shows yet another Group I data streams arrangement of Configuration 3.
Figure 8 shows the data streams arrangement of Group I data streams of Configuration 4 of the data transmission system of this invention.
Figure 9 shows yet another arrangement of Group I data streams of Configuration 4 of the data transmission system of this invention.
Figure 10 shows one of the data streams arrangements of Configuration 5 of the data transmission system of this invention. The particular arrangement of Group I data streams shown in this figure combines Configurations 1 and 3.
Figure 11 shows the system configuration of a multicast data streams generator of the data transmission system of this invention.
Figure 12 shows the system configuration of the receiver of the data transmission system of this invention.
Figure 13 shows the local storage versus transmission bandwidth trade-off relationship.
Figure 14 shows an alternative "on-demand" approach of Configuration 1.
Figure 15 shows an alternative "on-demand" approach of Configuration 2.
Figure 16 shows an alternative "on-demand" approach of Configuration 3.
Detailed Description of Preferred Embodiments

This invention is now described by way of example with reference to the figures in the following sections. Even though some of them may be readily understandable to one skilled in the art, the following Table 1 shows the abbreviations and symbols used throughout the specification together with their meanings so that they may be easily referred to.
VOD: Video-on-Demand
NVOD: Near Video-on-Demand
QVOD: Quasi Video-on-Demand
DINA: Distributed Interactive Network Architecture, as described in PCT applications nos. PCT/IB00/001857 and 001858
VCR: Video Cassette Recorder
STB: Set-Top Box
DDVR: Diskless Digital Video Recorder, the client of the system
IVOD: Instant Video-on-Demand, possible name of the system of this invention
J: no. of anti-latency data segments in an individual anti-latency data stream (in Configurations 1 to 3) or no. of data segments of the leading portion of the data to be transmitted (Configuration 4)
K: no. of data segments of the data to be transmitted
M: no. of anti-latency (Group I) data streams
N: no. of interactive (Group II) data streams
Q: amount of data to be transmitted
R: time required to transmit Q data over the network
S: amount of data in each data segment
T: time required to transmit each data segment over the network
A: no. of data streams in Group I(1) streams
C: no. of data segments in the data of Group I(1) streams
B: no. of data streams in Group I(2) streams
D: no. of data segments in the data of Group I(2) streams
E: no. of data segments in the coarse jump interval

Table 1. Abbreviations and symbols used.

Although the following description refers to the data to be delivered as being video, it is expressly understood that data in other forms may also be delivered in the system of this invention, for example audio or software programs, or their combination. For instance, this invention may be used for deploying operating system software to a large number of clients through a network upon request. Further, this invention may be utilised in data transmission systems handling a large amount of data with repetitive content, for instance in a video system bus of a computer handling many complicated but replicated 3D objects. Moreover, this invention may not be limited to the transmission of digital data only.
In this invention, a multi-stream multicasting technique is used to overcome the existing problems in VOD systems as described in the Background section. By using this technique, the users are allowed VCR-like interactivity without the need to add a storage unit at the STB and cache all the content that may be viewed by the user on a daily basis.
Figure 3 shows the system configuration. The multicast streams are generated from a multicast server unit. The streams are multiplexed onto the physical media and distributed to the end users through a distribution network. At each user end, there is a set-top box (STB), such as a DDVR, that selects a multitude of streams for processing.
By arranging the content to be carried on the streams in a desired manner (as shown later in Figures 4 to 10), the start-up latency may be minimised while the users are provided with interactive functions. The DDVR should have sufficient bandwidth, buffer and processing capability to handle the multiple streams.
The data transmission system of this invention, which may be called an IVOD system, may look similar to the NVOD system. However, the IVOD and NVOD systems are differentiated by the following points: 1. how the content is put on the staggered streams; 2. how the staggered streams are generated; and 3. how the DDVR selects and processes the multitude of staggered streams to restore the content.
The word "staggered" used above and throughout the specification in describing the data streams refers to the situation that each of the data streams begins transmission at different times. Therefore, two "frames" of two adjacent data streams, in which the term "frame" represents the repeating unit of each data stream, are separated by a time interval.
In the broad sense, the data transmission method and system may be described as providing two groups of data streams: Group I and Group II. Group I data streams, which may be termed anti-latency data streams, may serve to reduce the latency for starting up the transmission of the required data. Group I data streams may be generated by at least one anti-latency signal generator. Group II data streams, which may be termed interactive data streams, may serve to provide the desired interactive functions to the users. Group II data streams may be generated by at least one interactive signal generator. For the interactive functions provided by Group II data streams, reference can be made to the applicant's PCT applications nos. PCT/IB00/001857 and 001858, the contents of which are incorporated herein by reference. The operation of the interactive functions is not considered to be part of the invention in this application and the details will not be further described here.
The operation of the IVOD system can best be illustrated by the following examples. Each of these examples is a valid IVOD system, but they all differ in details with various trade-offs. These examples are only intended to show the working principles of IVOD systems and are not meant to describe the only possible ways of IVOD operation.
In the following examples, the content to be transmitted, having a total amount of data Q, requires a total time R to be transmitted over the network. The content, for example, may be a movie. The Q data is broken up into K segments each having an amount of data S. Each data segment requires a time T to be transmitted over the network. Q and S may be in units of megabytes, while R and T are in units of time.
For the sake of convenience, the data segments of the Q data are labelled from 1 to K respectively. Therefore, K = R/T. The Q data may be divided into a leading portion and a remaining portion. In most cases, the Group I anti-latency data streams may contain the leading portion only. The Group II interactive data streams may contain the remaining portion or the whole set of the Q data, and this may be a matter of design choice to be determined by the system manager.
It should be noted that the system may still work if the individual data segments contain different amounts of data from each other, provided that they all require a time T for transmission. This may be achievable by controlling the transmission rate of the individual data segments. However, individual data segments may be preferred to have the same amount of data S for the sake of engineering convenience. On the other hand, it may be relatively more difficult to implement the system with each of the data segments having the same amount of data S but different transmission times.
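The fragmentation step just described can be sketched as follows. The function name, the byte-level splitting and the choice of a 16-segment leading portion are illustrative assumptions, not the patent's prescribed implementation.

```python
# Illustrative sketch: fragment the Q bytes of content into K equal segments,
# each taking an (assumed) time T to transmit at a fixed rate.
def fragment(content: bytes, K: int):
    """Split `content` into segments labelled 1..K (the last may be slightly shorter)."""
    S = -(-len(content) // K)                    # ceiling division: bytes per segment
    return {k + 1: content[k * S:(k + 1) * S] for k in range(K)}

R = 120 * 60                                     # total transmission time in seconds
T = 30                                           # per-segment transmission time in seconds
K = R // T                                       # K = R/T segments
movie = bytes(K * 1000)                          # stand-in for the Q bytes of content
segments = fragment(movie, K)
leading = {k: segments[k] for k in range(1, 17)}            # e.g. a 16-segment leading portion
remaining = {k: segments[k] for k in range(17, K + 1)}      # remaining portion
print(len(segments), "segments in total;", len(leading), "segments in the leading portion")
```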
Although the following description refers to the transmission of one set of data, for instance a movie, it should be apparent to one skilled in the art that the method and system may also be adapted to transmit a certain number of sets of data depending on, for example, the bandwidth available.
A. Dual Streaming IVOD System (Configuration 1)

The simplest IVOD system is characterised by a dual-streaming operation.
Dual streaming means that each user will tap into at most two of the multicast data streams at any time. Most of the time, the user may only be tapping into one data stream.
The segments are put onto the staggered streams as shown in Figure 4. There are two groups of staggered streams. For Group I anti-latency data streams, there are J segments on each frame. T is the anti-latency time interval and may also be the upper bound for the start-up latency of the IVOD system. Each anti-latency data stream is preferably staggered by the anti-latency time interval T, although the anti-latency time interval may be set at any desired value other than T.
In this particular example, J is equal to 16 and T is 30 seconds. So the frames in each of the Group I data streams repeat themselves after a time of JT, i.e. 8 minutes. There are a total of M streams in Group I.
For Group II interactive data streams, there are N interactive data streams, with each of them being staggered by an interactive time interval. Although the interactive time interval may again be set at any desired value, the interactive time interval is preferably JT (i.e. 8 minutes in this example) for the sake of engineering convenience. Assuming the length of the content is R (say R equals 120 minutes), then there should be at least a total of N = R/(JT) = 15 streams in Group II. N may be larger than this value, but this may create unnecessary network load.
When a user starts to view the content at time t1, the DDVR at the user end will select one stream from Group I (Stream Ii) and one stream from Group II (Stream IIj) to tap into. Once the client connects to Streams Ii and/or IIj, the data streams are processed by the DDVR, i.e. the client, and the segments are buffered according to the segment sequence number. The availability of the Group I staggered streams with stream interval T limits the start-up latency to at most T.
Alternatively, the user or the client may tap into Stream Ii only and await all of the data in the leading portion being received by the client before tapping into Stream IIj. After the DDVR has latched onto a Group I stream, the DDVR will immediately look for a suitable Group II stream for merging. In this particular case, each Group II data stream may preferably contain only the remaining portion of the Q data.
The method for merging data streams can be found in the DINA technology.
After merging, the Group I stream may no longer be needed and the DDVR may then rely solely on Stream IIj for subsequent viewing. This may be the preferred alternative for minimising the network load.
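A possible client-side timeline for this dual-streaming start-up is sketched below. It assumes that a Group I frame (J segments, period JT) begins on every multiple of T, that a Group II frame begins on every multiple of JT, and that the DDVR merges into the first Group II frame starting at or after its own playback start; these scheduling details are illustrative assumptions rather than the only behaviour the patent allows.

```python
# Illustrative timeline for the dual-streaming start-up of Configuration 1.
# Assumptions: T = 30 s anti-latency interval, J = 16 segments per Group I frame,
# a Group I frame starts on every multiple of T and a Group II frame on every multiple of JT.
T, J = 30, 16
JT = J * T                       # interactive (Group II) stream interval in seconds

def startup_plan(arrival):
    """Return (wait, Group I frame start, Group II frame start) for an arrival time in seconds."""
    group_I_start = -(-arrival // T) * T          # next Group I frame boundary (at most T away)
    wait = group_I_start - arrival
    # The Group I frame supplies JT seconds of content, so the first Group II frame
    # starting at or after playback begins can be buffered and merged into in time.
    group_II_start = -(-group_I_start // JT) * JT
    return wait, group_I_start, group_II_start

for arrival in (0, 45, 307, 1199):
    wait, gi, gii = startup_plan(arrival)
    print(f"arrive {arrival:4d} s: wait {wait:2d} s, Group I frame at {gi} s, "
          f"merge into Group II frame starting at {gii} s")
```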
It should be noted that once the system has started, the user can initiate the following interactive requests: pause and resume, rewind, and slow-motion playback. However, forward and backward jumps may be restricted to jumps to any one of the Group I or Group II streams (at any particular time). This problem may be resolved by fine-tuning the parameters of the system. For instance, the Group I data streams may be designed to contain content that relatively few people wish to look at, like copyright notices.
The total number of streams in this type of IVOD system is M + N = J + R/(JT). The optimal system configuration is calculated to be M = N = J = √(R/T), and the optimal total number of streams is given by 2√(R/T).
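The optimisation just stated can be checked numerically. The sketch below treats the stream count J + R/(JT) as a function of J and confirms it is minimised near J = √(R/T); the concrete values follow the example above (R = 120 minutes, T = 30 seconds), and the rounding to whole streams is my own assumption.

```python
# Illustrative check of the Configuration 1 stream-count optimum (example values assumed).
import math

R = 120 * 60          # content length in seconds
T = 30                # anti-latency interval in seconds

def total_streams(J):
    """Group I streams (M = J) plus Group II streams (N = R/(J*T))."""
    return J + R / (J * T)

J_opt = math.sqrt(R / T)          # analytic optimum J = sqrt(R/T)
print(f"optimal J ~ {J_opt:.1f}, total ~ {total_streams(J_opt):.1f} streams (= 2*sqrt(R/T))")

# Whole-number choices of J around the optimum (the rounding is my own assumption).
for J in (8, 15, 16, 32):
    M, N = J, math.ceil(R / (J * T))
    print(f"J = {J:2d}: M = {M:2d}, N = {N:2d}, total = {M + N}")
```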
B. Dual Streaming IVOD System (Configuration 2)

The second example of an IVOD system is also characterised by a dual-streaming operation. Again, the content is broken up into K segments of regular length T, and the segments are labelled from 1 to K respectively. The segments are put onto the staggered streams in a pattern as shown in Figure 5. In this configuration, there are also two groups of staggered streams. For Group I anti-latency data streams, there are J segments on each frame and the frames are repeated on each stream. In this example, J is again chosen to equal 16 and T is 30 seconds. This configuration is characterised in that one of the Group I data streams, Stream I1, contains only Segment 1 repeated in all time slots. Streams I2 to I9 contain Segments 2 to 17. In other words, Stream I1 may be viewed as a leading data stream containing the leading segment of the leading portion. Streams I2 to I9 may be considered as a plurality of finishing data streams containing the rest of the leading portion in the number of J segments. The Group I stream interval may be chosen to be any desired value, but is again preferably set to be T for the same reason as in Configuration 1. Streams I2 to I9 repeat themselves after JT (8 minutes in this example).
In this particular example, there should be at least a total of M = J/2 + 1 streams in Group I for the smooth merging of the leading data stream and the finishing data streams. M may be less than this value, but then the user may suffer from the phenomenon of "dropping frames". M may be larger than this value, but this may create unnecessary network load. This may be a matter of design choice that should be left to be determined by the system administrator.
Although the leading data stream shown in Figure 5 contains only one leading segment, it should be understood that the leading data stream may contain more than one leading segment, for example Segments 1 to 4. The above conditions of the Group I anti-latency data streams of this Configuration 2 may then be viewed as if T were four times as long, while this change may not affect the Group II interactive data streams. In such cases, the user may suffer from a larger start-up latency. On the other hand, M may be substantially reduced and could be M = J/8 + 1 for the smooth merging of the leading data stream and the finishing data streams. Although this may be less desirable, this may again be a matter of design choice that should be determined by the system administrator.
For Group II streams, the arrangement and the set-up of the streams may be the same as in the previous example, and the same settings and variations are also applicable to this configuration.
When a user starts to view the content at time t1, the DDVR at the user end will immediately tap onto Stream I1. The start-up latency should be bounded by T as the leading segment is repeated every time period T. After all data in the leading segment is received, the DDVR will also tap onto one of the Group I finishing data streams, I2 to I9 in this case. For ease of illustration, Stream I2 is chosen. As an alternative, the DDVR may tap onto the leading data stream and one of the finishing data streams simultaneously if the DDVR is capable of doing so. In the latter case, both streams are processed by the DDVR and the segments are buffered according to the segment sequence number.
The DDVR will also tap onto one of the Group II streams (in this case Stream II2). The time at which the DDVR taps onto the Group II stream is a matter of choice; it may do so: 1. immediately after tapping onto the leading data stream, Stream I1; 2. immediately after tapping onto one of the finishing data streams; or 3. after all data in the leading portion contained in the Group I data streams is received by the DDVR. Generally, the DDVR should tap onto one of the Group II streams at least right before all data in the Group I streams is received or played by the client.
After all data in the Group I data streams has been buffered and received, the DDVR then merges onto one of the Group II streams. The merging technique is described in the DINA technology. After merging, the Group I streams (e.g. Stream I2) may no longer be needed and the DDVR may rely only on the Group II stream for subsequent viewing to save bandwidth. Any allowable interactive request received at any time can be entertained as previously shown in the DINA technology.
WO 03/013124 PCT/CN02/00527 The total number of streams in this IVOD system is N As N 2 preferably equals to the optimal configuration is given by J 2 and JT T the optimal total number of data streams of the system is equal to 2 +1.
VT
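For concreteness, the sketch below evaluates the Configuration 2 stream counts under the model reconstructed above (M = J/2 + 1 Group I streams and N = R/(J·T) Group II streams); the example values of R and T are assumptions carried over from Configuration 1.

```python
# Illustrative check of the Configuration 2 stream counts, assuming M = J/2 + 1 Group I
# streams and N = R/(J*T) Group II streams as reconstructed above.
import math

R = 120 * 60          # content length in seconds
T = 30                # anti-latency interval in seconds

def total_streams(J):
    """Assumed model: M = J/2 + 1 Group I streams plus N = R/(J*T) Group II streams."""
    return J / 2 + 1 + R / (J * T)

J_opt = math.sqrt(2 * R / T)      # minimises J/2 + R/(J*T)
print(f"optimal J ~ {J_opt:.1f}, total ~ {total_streams(J_opt):.1f} streams (= sqrt(2R/T) + 1)")

for J in (16, 22, 32):
    M, N = J // 2 + 1, math.ceil(R / (J * T))
    print(f"J = {J:2d}: M = {M:2d}, N = {N:2d}, total = {M + N}")
```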
C. Dual Streaming IVOD System (Configuration 3)

The third example of an IVOD system is also characterised by a dual-streaming operation, with the segments arranged in a hierarchical periodic frame structure with a size based on the Fibonacci numbers. Again, the content is broken up into K segments of regular length T, and the segments are labelled from 1 to K respectively. The segments are put onto the staggered streams in a pattern as shown in Figure 6. There are also two groups of staggered streams.
In this configuration, the Group I data streams contain the data in the leading portion having J segments. Note that this J is slightly different from those used in Configurations 1 and 2. There are M Group I data streams labelled from 1 to M. For each Group I stream Im, where m is an integer representing the stream number, the frame period is given by Fm, where Fm is the m-th Fibonacci number. The first few Fibonacci numbers are shown in Table 2. The Fibonacci numbers have the property that F(y) = F(y-1) + F(y-2), where y is an integer starting from 3. The Group I stream interval is again preferably set to be T as in Configurations 1 and 2. There are 12 Group I streams in this example. For Group II streams, the arrangement and the set-up of the streams are similar to the previous examples, but for the sake of illustration, the Group II streams start at Segment 81.
j:   1  2  3  4  5  6   7   8   9   10  11   12
Fj:  1  2  3  5  8  13  21  34  55  89  144  233

Table 2. Fibonacci numbers.
In the above discussion, the DDVR is presumed to connect to the 1 s t and 2 nd data streams for starting-up the movie such that the latency is bounded to be T.
However, if the user wishes, he may choose to first tap onto the mint and (m+l)h data streams, wherein m is any number larger than 1. The user can still view the content but may be suffering from larger latency. This may be preferred by some users who wish to skip the first few minutes of a movie, for example.
Further, as in Configuration 2, each of the data segments shown in Figure 6 may contain more than one of the K segments of the data to be transmitted. For example, each of the data blocks as shown in Figure 6 may in fact contains 5 data segments. The above conditions of the Group I anti-latency data streams of this Configuration 3 may then be viewed as T being five times as long, while this change WO 03/013124 PCT/CN02/00527 may not affect the Group II interactive data streams. In such cases, the user may suffer from a larger start-up latency.
As an alternative, m may not have to start from 1, provided that the users can accept a larger start-up latency and trimming of data. For instance, the system administration may remove the first four Group I data streams in Figure 6. In the case of software transmission, this arrangement may not be allowed, otherwise the user may not be able to receive the complete software. However, in the case of video transmission, this may be acceptable, provided that the trimming of the video is accepted by the copyright owner.
By constructing the frame period of the streams according to the Fibonacci number after Stream I m-1 has been received, the DDVR would have buffered at least Fm F ,n-I F time slots. Using the merging methodology as described in the DINA technology, Stream I can be smoothly merged into Stream I as the frame size of Stream I is exactly Fm It is noted that. after m segments are received, exactly m more segments would have been buffered because of the dual streaming arrangement. The DDVR preferably begin to merge onto one of the Group II streams, at the very least to save bandwidth, once the number of segments buffered has exceeded the size of the Group II stream interval (in this case 80 segments are needed for an 8-minute Group II stream interval). After merging, the Group I stream Stream Ii) may no longer be needed and the DDVR may rely only on the Group II stream for subsequent viewing. Any allowable interactive request received at any time can be entertained as described in the DINA technology.
There is no optimal parameter for this Configuration. To save bandwidth, there should be no Group II data stream. However, users may only be able to enjoy limited interactivity depending on how much of the data is received and buffered in the DDVR. Specifically, the user may perform pause, resume, rewind, slow motion, and backward jump, but the user may not be able to perform fast forward and forward jump functions.
WO 03/013124 PCT/CN02/00527 The number of Group I data stream required, M, is determined by the number of Group II data streams, which is in turn to be determined manually according to various system factors. With a given start-up latency T, the total number of streams required in this IVOD system can be found by looking up the necessary fi-ame size from a table containing the relevant Fibonacci numbers. The minimal number of data 2K streams should be M such that FM N for the smooth merging between the individual Group I data streams. M may be less than this value but then the user may suffer from the phenomenon of "dropping frames". M may be larger than this value but this may create unnecessary network load. This may be a matter of design choice that should be left to be determined by the system administrator.
Using this technique, the start-up latency T can be as low as 6 seconds (with an average of 3 sec), with a Group II stream interval of 8 minutes. The total number of streams required for a 2-hour content can be as low as only 26.
An alternative arrangement for the Group I streams is shown in Figure 7.
Note that the frame structure of the streams only follows the Fibonacci sequence after Stream 4.
D. Multi-Streaming IVOD System (Configuration 4) The previous three examples show several possible implementations of the IVOD systems with dual-streaming. In fact, there are many more possible implementations of the IVOD system, each depending on a different arrangement of the segments in different streams, and on the maximum number of streams that the end user DDVR must simultaneously tap into and process. The above three examples are relatively simple to understand and implement, but the number of streams used are not optimal because of the restriction that only two maximum streams are tapped into and processed at any given time. In the current configuration, a multi-streaming IVOD system with the optimal number of streams is demonstrated.
This configuration is realized with the assumption that all the streams that carried the content are all tapped into and processed by the end user DDVR. Figure 8 shows a possible optimal arrangement of the initial thirty segments or so in various WO 03/013124 PCT/CN02/00527 streams based on the harmonic series approach. The segments are labelled 1, 2, 3, etc... The necessary and sufficient condition for guaranteeing the start up latency to be bounded within one slot interval using only an optimal number of streams is that the placement of the segments should be done in such a way that Segmentj thcjth segment from the beginning of the leading portion) should be repeated in everyj time slots or less, for allj from 1 to J. For example, Segment 1 should be repeated in every time slot in order that the start-up latency is bounded within one anti-latency interval T. Therefore, there may be a whole stream taken up by Segment 1 alone.
Segment 2 should be repeated in every other time slot in order that the second segment is available immediately after the first segment has been received. Similarly Segment 3 should be repeated in every three time slots and Segment j should be repeated in every j time slots. For j> 1, the segment j may be repeated more frequently than required. That is, thejth segment is repeated by an anti-latency time interval jT. Note that the definition of the term "anti-latency time interval" in this Configuration 4 is different from that in Configurations I to 3.
The exact stream where the segments are placed does not matter as we are assuming that all streams are being received and processed by the DDVR. The segments are buffered by the DDVR and rearranged into a suitable order. The unfilled slots in Figure 9 can contain any data or even be left unfilled.
As in Configuration 3, there is no optimal parameter for this Configuration.
To save bandwidth, there should be no Group II data stream, in which users may only be able to enjoy limited interactivity depending on how much of the data is received and buffered in the DDVR. This may not be desirable. The number of Group I data stream required, M, is determined by the number of Group II data streams, which is in turn to be determined manually according to various system factors. The total number M of streams required for carrying the J time slots can be found by summing the harmonic series from 1 to J, such that M 1j=1 This is approximately equal to y where y is the Euler's constant 0.5772...) when J is large. Even though J can be set to any desired number larger than for the sake of engineering
N
WO 03/013124 PCT/CN02/00527 convenience, it is preferred to have J which equals to the number of data
N
segments in the interactive time interval. This is the optimal number of streams required to bind the start-up latency to within one slot interval.
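The harmonic-series stream count can be checked directly. The sketch below sums 1/j for the first J segments and compares the result with ln(J) + γ; the choice J = 80 (one 8-minute interactive interval at T = 6 seconds, as in the earlier example) is an assumption for illustration.

```python
# Illustrative check of the harmonic-series stream count for Configuration 4.
import math

T = 6                      # assumed segment length in seconds (as in the earlier example)
J = 80                     # segments in one interactive interval (8 min / 6 s), i.e. K/N
harmonic = sum(1.0 / j for j in range(1, J + 1))
approx = math.log(J) + 0.5772156649          # ln(J) + Euler's constant

print(f"M = sum of 1/j for j = 1..{J} = {harmonic:.3f}  ~  ln(J) + gamma = {approx:.3f}")
print(f"so about {math.ceil(harmonic)} Group I streams bound the start-up latency to one slot ({T} s)")
```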
Further, as in Configurations 2 and 3, each of the data segments shown in Figure 8 may contain more than one of the K segments of the data to be transmitted. For example, each of the data blocks as shown in Figure 8 may in fact contain 10 data segments. The above conditions of the Group I anti-latency data streams of this Configuration 4 may then be viewed as if T were ten times as long, while this change may not affect the Group II interactive data streams. In such cases, the user may suffer from a larger start-up latency.
Again, as an alternative, j may not have to start from 1 but from any number larger than 1, provided that the users can accept a larger start-up latency. For instance, the system administrator may remove the first three Group I data streams in Figure 8. In the case of software transmission, this arrangement may not be allowed, as otherwise the user may not be able to receive the complete software. However, in the case of video transmission, this may be acceptable, provided that the trimming of the video is accepted by the copyright owner.
Alternatively, j may start from any number larger than 1, for example 5. However, this merely means that the first data segment in Figure 8 is being repeated by an anti-latency time interval of 5T instead of T, with the subsequent j-th data segments being repeated by an anti-latency interval of (j+4)T. This alteration should be obvious to one skilled in the art.
To create an IVOD system based on this optimal multi-streaming condition, the streams are again divided into two groups, Groups I and II. The segment arrangement of the Group I streams has been shown in Figure 8. The segment arrangements of the Group II streams are the same as those shown in any one of Figures 4 to 6. When a user initiates a viewing request, all of the Group I streams should be received and processed by the DDVR. In addition, a suitable Group II stream will also be tapped into and processed. This allows a smooth merging of the Group I streams (where the initial m segments are placed) into a single Group II stream. As an alternative, the tapping onto the Group II stream may wait until all data in the leading portion contained in the Group I streams is received by the client DDVR.
After one Group II stream interval (which is again set to be JT intentionally in this case), all the Group I streams may no longer be needed and only a single Group II stream is needed for the continuous viewing by the user. Like before, through the use of a plurality of Group II streams, once the system has started, the user can initiate any of the allowable interactive requests, including pause and resume, rewind, and slow-motion playback.
As in Configuration 3, it is possible to create an IVOD system entirely based on the Group I streams as illustrated previously. By doing that, the number of streams can be reduced with a minimised start-up latency. However, users of such systems may be restricted to limited interactivity, as discussed in Configuration 3. Furthermore, the buffer size at the DDVR must be as large as the entire content, and the processing capability required of the DDVR is more demanding for the current configuration. The decision regarding which system to deploy should be left as an option to the service provider.
It should further be noted that this multi-streaming arrangement may be used to replace the Fibonacci stream sequences (Group I streams) in Configuration 4 to further reduce the number of streams required. The condition is that the DDVR should have enough buffer and processing power to buffer and process the received data. Table 3 in the upcoming section lists results for the various configurations.
A non-optimal multi-streaming arrangement, known as logarithmic streaming, is shown in Figure 9.
E. Mixed Dual-Dual/Multi-Dual Streaming IVOD System (Configuration 5)

Configurations 3 and 4 demonstrate an IVOD system with a very short start-up latency in comparison with Configurations 1 and 2 using a comparable number of streams. But Configuration 1 or 2 also has an advantage over Configuration 3 or 4: they allow coarse jumping from stream to stream during the first stream interval while Configuration 3 or 4 does not. In real life, the first few minutes of a content source usually contain a lot of header information that many users may want to skip by jumping. Therefore, it is desirable to provide at least a limited jump capability for the users.
By combining Configuration 1 or 2 with Configuration 3 or 4, one may create an IVOD system with a limited jump capability even without the help of an external unicast stream. This IVOD system contains three groups of staggered streams, namely Group I(1), Group I(2), and Group II. The Group I(1) data streams have a total number of A data streams responsible for distributing data having C segments. Similarly, the Group I(2) data streams have a total number of B data streams responsible for distributing data having D segments, with each of the B data streams being staggered by a coarse-jump interval. There are E data segments in the coarse-jump interval.
To give a more concrete example, let us assume a segment size T of 6 seconds.
Let Group I(1) contain the first 7 Fibonacci streams as shown in Configuration 3. Let Group I(2) contain the 8 Group I streams as shown in Configuration 1, running from Segment 11 to 90, with a staggered stream interval of 10 segments. Note that Group I(2) can contain data segments running from 1 to 90, although this may seem redundant. Accordingly, the frame period of the Group I(2) streams is 80 segments or 8 minutes, and this is the coarse-jump frame period allowing the user to perform a coarse-jump interactive function while the DDVR is connected to the Group I data streams.
Group II streams of Configuration 5 are identical to the Group II streams of the other configurations. In this particular example, each of the Group II streams starts from Segment 1 and runs all the way to the end of the entire content. The arrangement of the streams and segments is shown in Figure 10. With this hierarchical arrangement of streams and segments, it can be seen that the user can start at any time with a start-up latency of one segment (6 seconds in this example). Furthermore, users can coarse jump at any time within the start-up period, the time when the DDVR connects to the Group I streams. The start-up period is preferably defined to be the time within the first Group II stream interval (that is, from the 0-minute point to the 9-minute point) as in the previous configurations. The coarse jumps are 1 minute apart from each other, as determined by the coarse-jump frame period. Thus, the users can skip the headers using this arrangement. The total number of streams needed for holding a two-hour content in this particular example can be counted from Figure 10. Although Figure 10 only shows the combination of Configurations 3 and 1 in the Group I data streams, it should be obvious to those skilled in the art that the following combinations are also possible: a. Configurations 4 and 1; b. Configurations 3 and 2; c. Configurations 4 and 2. The number of Group I(1) data streams required, i.e. A, may be determined by taking K/N = E as in Configurations 3 and 4. That is, if Configuration 3 is used in Group I(1), there should be A data streams in Group I(1) such that F_A ≥ 2E. If Configuration 4 is used, then A = Σ(1/c), summed from c = 1 to C and rounded up to a whole number of streams. As in Configuration 4, C, the total number of data segments to be transmitted in Group I(1), preferably equals E. The same considerations on the number of data streams required as in Configurations 3 and 4 may also be applicable to Group I(1).
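For the concrete example above, the Group I(1) stream count can be reproduced with a short calculation. The sketch below is illustrative only; the Fibonacci indexing F_1 = 1, F_2 = 2, F_3 = 3, F_4 = 5, ... is an assumption (chosen because it reproduces the 7-stream example), not something stated explicitly in this passage.

```python
import math

def fibonacci_streams_needed(E: int) -> int:
    """Smallest A with F_A >= 2*E (Configuration 3 used for Group I(1)).
    Assumes the indexing F_1 = 1, F_2 = 2, F_3 = 3, F_4 = 5, ..."""
    a, fa, fb = 1, 1, 2
    while fa < 2 * E:
        a, fa, fb = a + 1, fb, fa + fb
    return a

def harmonic_streams_needed(C: int) -> int:
    """Configuration 4 used for Group I(1): harmonic sum over C segments, rounded up."""
    return math.ceil(sum(1.0 / c for c in range(1, C + 1)))

E = 10   # coarse-jump interval of 10 segments, as in the example
print("Config 3 Group I(1):", fibonacci_streams_needed(E), "streams")   # 7
print("Config 4 Group I(1):", harmonic_streams_needed(E), "streams")    # 3
```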
The decision regarding which combination to deploy should again be left as an option to the service provider.
Alternative Arrangements of Configurations 1, 2, and 3

For a VOD system built to serve a large number of users, the anti-latency data streams as described above may preferably be generated continuously, such that these streams are present in the system continuously, or at least during prime time (say, 6-11 pm), for users to tap into. On the other hand, if there are relatively few users in the system, say several thousand users, or the particular program being delivered is not requested very frequently, some further bandwidth may be saved if the anti-latency data streams are generated upon request of the users. This alternative approach may be beneficial to Configurations 1, 2, and 3. These are shown in Figures 14, 15, and 16. In these figures, the data segments in grey represent those data segments or data streams that are "turned on" upon requests from the users.
For Configuration 1, each of the Group I anti-latency data streams is still staggered by an anti-latency stream interval T. However, as described above, not all of the Group I anti-latency data streams may be present or "turned on" at all times.
Instead, they are generated upon requests from the users, and such requests are "batched" within T. This means that if a user raises a request for said data within an anti-latency stream interval, the anti-latency data stream is generated at the next earliest anti-latency stream interval. As an example, referring to Figure 14, consider users requesting the data at times equal to 2T, 3T, and 16T. In this context, it means that users request the data within the intervals 1T to 2T, 2T to 3T, and 15T to 16T, respectively. Accordingly, in this example, only streams 2, 3, and 16 are generated, or "turned on", in a snapshot of the data streams of the system, while streams 1 and 4-15 are "turned off". As shown in Figure 14, the resulting Group I data streams may appear not to have a regular stream interval.
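The batching rule can be expressed compactly. The sketch below is a minimal illustration under the assumption that request times are measured from the start of the first anti-latency interval; it reproduces the streams-2, 3 and 16 example given above.

```python
import math
from collections import defaultdict

def batch_requests(request_times, T):
    """Batch user requests within each anti-latency stream interval T:
    a request arriving in ((n-1)*T, n*T] is served by 'turning on' the
    anti-latency stream that starts at n*T (the next earliest interval)."""
    turned_on = defaultdict(list)
    for t in request_times:
        slot = math.ceil(t / T)          # index of the next stream start
        turned_on[slot].append(t)
    return dict(turned_on)

# Requests between 1T-2T, 2T-3T and 15T-16T switch on streams 2, 3 and 16 only.
T = 6.0
print(batch_requests([1.2 * T, 2.7 * T, 15.1 * T], T))
```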
This concept may also be extended to Configurations 2 and 3.
Accordingly, in Configuration 2, not all of the leading data segments in the leading data stream, nor all of the finishing data streams, may be "turned on" at all times. They are "turned on" upon requests from the users. An example is illustrated in Figure 15. It should be obvious to one skilled in the art that each of the leading data segments relates to a corresponding finishing data stream, and this may assist in achieving the goal by ordinary programming techniques. The corresponding finishing data stream should also be generated at the time when the leading data segment is generated.
Similarly, in Configuration 3, not all of the Fm segments distributed in the Group I data streams may be "turned on" at all times. An example is illustrated in Figure 16. Again, the relationship among the Group I data segments should be obvious to one skilled in the art, and this may assist in achieving the goal by ordinary programming techniques. All of the corresponding Fm segments should be generated at the appropriate time when the client raises the request. Specifically, subsequent segments should be generated before all data in the preceding segment is received by the client.
Further, once the DDVR has merged with the Group II data streams, the anti-latency data streams can be terminated to further minimise the bandwidth usage.
As one of the basic requirements of Configuration 4 is that the user should be connected to all of the Group I data streams, this "on-demand" approach does not appear to be applicable to Configuration 4.
Although this alternative approach may seem to save some additional bandwidth in comparison with the original Configurations, it may be less preferred for several reasons. First, it may increase the workload and the processing requirement on the server side, and the complexity in programming and implementation. Second, it may lead to overloading of the resulting system if care is not taken at the design stage in allocating the required bandwidth. Third, this alternative approach will in fact become equivalent to the original Configurations when the number of requests from the users is large.
Additional Features of Individual Data Segments

To facilitate the change-over of the streams without incurring substantial loss of data during the transition, the beginning of each data segment, which can be termed the head portion, may contain duplicated data appearing in the tail portion of the immediately preceding segment. The amount of data to be carried in the duplicated portion may be T' (normalised with respect to the data rate of the stream), where T' is the delay that may be incurred during the change-over of the streams. Typically, T' may be in the order of 10-20 milliseconds.
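A minimal sketch of the sizing implied by this duplication follows. The 1 Mb/s bit rate and the 15 ms change-over delay are illustrative assumptions, the latter chosen within the 10-20 ms range stated above.

```python
def head_overlap_bytes(bitrate_bps: float, changeover_delay_s: float = 0.015) -> int:
    """Amount of tail data from the preceding segment to duplicate at the head
    of each segment so that a stream change-over of T' seconds loses nothing."""
    return int(bitrate_bps * changeover_delay_s / 8)   # bits -> bytes

# For a 1 Mb/s stream and T' = 15 ms, roughly 1.9 kB is duplicated per segment.
print(head_overlap_bytes(1_000_000, 0.015))
```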
IVOD System Requirements

There are several system requirements:

a. The server needs to generate the appropriate multi-streams in patterns that have been illustrated in any one of Configurations 1 to 5, or such patterns as may be designed.
b. The distribution network should have sufficient capacity to carry all the required streams to the end user DDVR.
c. The end user DDVR should have sufficient bandwidth, buffer and processing capability to handle the multi-streams. The DDVR should also have sufficient storage to buffer at least one Group II stream interval of data from the multi-streams.
These factors may affect the service provider in choosing which configuration to deploy.
Concept of Diskless DVR

Generally, the receiver DDVR may have a processor for raising requests for the content, and a connector for connecting to the Group I and II data streams.
For Configurations 1 and 2, it may be necessary for the DDVR to include a buffer for buffering the received Group I data streams. For Configurations 3 and 4, the DDVR should include a buffer for buffering the data received from Group I data streams. The processor will then also be responsible for processing the data to put the data in a proper order.
With the multi-streaming concept, the receiving device (the receiver) at the user end may not need to have any hard disk storage. The only memory or buffer needed at the STB (the client/receiver) may be RAM (random-access memory) to buffer one stream interval's worth of data. Assuming a stream interval of 8 minutes, this requires roughly 60 MB of RAM for a 1 Mb/s MPEG-4 stream. This technique can be contrasted with many VOD techniques that require a large hard disk storage (sometimes as large as 60 GB) at the STB. Therefore, this IVOD system also appears to the users like a diskless DVR. However, the system provider may choose to provide additional storage to the users in the form of a hard disk or other non-volatile medium, or use such other equipment as may be necessary to buffer and receive the data.
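The 60 MB figure follows directly from the stream interval and the bit rate, as the sketch below shows (decimal megabytes assumed).

```python
def buffer_megabytes(stream_interval_min: float, bitrate_mbps: float) -> float:
    """RAM needed to hold one Group II stream interval of data."""
    bits = stream_interval_min * 60 * bitrate_mbps * 1_000_000
    return bits / 8 / 1_000_000   # bytes -> MB (decimal)

# 8-minute stream interval at 1 Mb/s (MPEG-4): about 60 MB, as stated above.
print(f"{buffer_megabytes(8, 1.0):.0f} MB")
```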
It should be further noted that there might be several options for the DDVR.
First, the DDVR may be configured such that it plays the received data at a slower rate than the transmission rate of the data. The transmission rate may be expressed as S/T, under the condition that each data segment contains the same amount of data S. In such cases, the DDVR may be required to have a larger buffer size to accommodate the received but not yet played data.
Secondly, the DDVR may be configured to contain or pre-fetch at least a portion of the data in the Group I data streams, i.e. the leading portion of the data to be transmitted, for a certain period of time in its local buffer. Such data may be termed "pre-fetched data". If desired, the pre-fetched data may contain all of the data contained in the Group I data streams, provided that the DDVR has an adequate buffer size. In one extreme, the content of the data to be transmitted may be refreshed every day for video data, or even more than once per day. In this particular example, it may be necessary for the pre-fetched data to be refreshed every day. The refresh time may be set at any desired value that may range from one day to even one year. It may be preferable to refresh the pre-fetched data during an off-peak period, like after midnight (for instance, from 01:00-06:00), or between 10:00 and 15:00, when the network activity resulting from clients' requests may be at a minimum. This process may be initiated by the anti-latency signal generator, the interactive signal generator, or by the client itself through a routine call procedure. In doing so, the latency time and the total number of data streams required in the network may be further reduced. This may be particularly important for VOD systems transmitting a large number of sets of data.
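A minimal sketch of such a refresh policy is given below. The two windows are taken from the examples above; the scheduling logic itself is an assumption for illustration, not part of the specification.

```python
from datetime import datetime, time

# Off-peak windows matching the examples in the text (assumed local time).
OFF_PEAK_WINDOWS = [(time(1, 0), time(6, 0)), (time(10, 0), time(15, 0))]

def may_refresh(now: datetime) -> bool:
    """Allow a pre-fetch refresh only inside an off-peak window,
    when network activity from client requests is expected to be minimal."""
    t = now.time()
    return any(start <= t < end for start, end in OFF_PEAK_WINDOWS)

print(may_refresh(datetime(2002, 7, 29, 2, 30)))   # True  (01:00-06:00 window)
print(may_refresh(datetime(2002, 7, 29, 20, 0)))   # False (prime time)
```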
Trade-off of Space-Time-Bandwidth

There is a trade-off relationship for the different configurations of the IVOD systems of this invention among buffer storage at the DDVR (space), start-up latency (time) and streams (transmission bandwidth) required. This is shown in Table 3 and further illustrated in Figure 13.
In Figure 13, Vertex 1 may be realised as current VOD systems with all the data being sent and then stored in the STB, whether the client raises a request for the data or not. In such a case, the STB should have a relatively large buffer size. This may increase the manufacturing costs of the STB. Vertex 2 may represent the systems as described in Configurations 1-5. Under such a configuration, the requirement on the STB may be minimal while the system may be more demanding on the bandwidth. Vertex 3 may represent a hybrid system of Vertexes 1 and 2.
The decision on which "Vertex" to choose may be a matter of design choice depending on various factors including the bandwidth available, the specification of the STB, local requirements on latency and interactivity, and so on.
Content Size L = 1 hr — Number of Streams Required

  Staggered Interval:                                        6 min   7 min   8 min   10 min   15 min
  Dual-Streaming
  Configuration, T = 30 sec (coarse jump 1 minute)             22      23      24      26       34
  Configuration, T = 30 sec (coarse jump 2 minutes)            17      17      17      17        -
  Configuration, T = 6 sec (no coarse jump allowed)            20      19      18      17       16
  Configuration, T = 6 sec (coarse jump 1 minute)              23      23      23      23       26
  Configuration, T = 6 sec (coarse jump 2 minutes)             22      22      21      20       21
  Multi-Streaming
  Optimal Configuration, T = 6 sec (no coarse jump allowed)    15      14      13      12        -
  Optimal Configuration, T = 6 sec (coarse jump 1 minute)      20      20      20      20       23
  Optimal Configuration, T = 6 sec (coarse jump 2 minutes)     18      18      17      16       17

Content Size L = 2 hr — Number of Streams Required

  Staggered Interval:                                        6 min   7 min   8 min   10 min   15 min
  Dual-Streaming
  Configuration, T = 30 sec (coarse jump 1 minute)             32      31      31      32       38
  Configuration, T = 30 sec (coarse jump 2 minutes)            27      25      24      23       24
  Configuration, T = 6 sec (no coarse jump allowed)            30      27      26      23        -
  Configuration, T = 6 sec (coarse jump 1 minute)              33      31      30      29       32
  Configuration, T = 6 sec (coarse jump 2 minutes)             32      30      28      26        -
  Multi-Streaming
  Optimal Configuration, T = 6 sec (no coarse jump allowed)    25      22      20      18       14
  Optimal Configuration, T = 6 sec (coarse jump 1 minute)      31      29      27      27       28
  Optimal Configuration, T = 6 sec (coarse jump 2 minutes)     28      26      24      22       21

Table 3. Tradeoff among Buffer Storage (Space), Startup Latency (Time) and Streams (Transmission Bandwidth) Required

Application to cable, satellite and terrestrial broadcasting systems

The IVOD systems of this invention may find immediate applications in existing cable TV, terrestrial broadcasting, and satellite broadcasting systems. With very little modification to the existing infrastructure, non-interactive broadcasting (NVOD) systems may be converted into an IVOD system. Both analogue and digital transmission systems can take advantage of the multi-streaming concept. However, the discussion below will only describe system configurations for digital transmission systems.
In these digital broadcasting systems, the RF transmission bands are usually divided into 6 MHz (NTSC) or 8 MHz (PAL) channels. There can be over a hundred channels in a cable TV, terrestrial or satellite broadcasting system. Figure 11 shows a typical system configuration for this multi-streaming system. It is very similar to an existing broadcasting system. Only the transmission unit at the head end, which may be called an anti-latency device, and the reception unit at the user end, the client/receiver, may need to be modified. At the head end, instead of sending analog signals in each channel, digital signals such as QAM are transmitted. Typically, one can put 30 Mb/s into an RF channel. Assuming a 2-hour content, one can first use MPEG-4 or other compression algorithms to convert the analog signal into a digital stream with a bit rate of roughly 1 Mb/s. Using the Fibonacci dual-streaming (Configuration 3) or the optimal harmonic multi-streaming IVOD concept (Configuration 4), one can place 30 to 40 IVOD streams into a single RF channel. The contents are put into different RF channels according to the PAL/NTSC/SECAM standard to maintain compatibility with the existing broadcasting system, and each RF channel can contain a few hours of contents.
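The channel arithmetic above can be summarised as follows. The 100-channel plant and the 25-streams-per-title figure are illustrative assumptions (the latter roughly in line with Table 3 for a two-hour title), not values fixed by the specification.

```python
def streams_per_rf_channel(channel_mbps: float = 30.0, stream_mbps: float = 1.0) -> int:
    """How many compressed IVOD streams fit into one digital (QAM) RF channel."""
    return int(channel_mbps // stream_mbps)

def interactive_hours(channels: int, streams_per_title: int, title_hours: float = 2.0) -> float:
    """Rough count of fully interactive content hours a broadcast plant can offer."""
    titles = (channels * streams_per_rf_channel()) // streams_per_title
    return titles * title_hours

print(streams_per_rf_channel())        # 30 streams per RF channel
print(interactive_hours(100, 25))      # about 240 hours across 100 channels
```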
At the user end, the set top box should be RF-tuned to the particular RF channel of interest. Then the cable modem would filter out the 30-40 Mb/s digital streams and decode two streams at a time (for Fibonacci dual-streaming systems) or decode all the harmonic multi-streams (for harmonic multi-streaming systems). Figure 12 shows the block diagram of the STB cable modem. The STB cable modem is similar to other STB cable modems except for its processing unit, which can process at least 2 multi-streams simultaneously rather than a single stream. The decoded streams would be buffered in the STB and the content would be reconstructed according to the sequence numbers of the segments. With the hundreds of channels available in a typical broadcasting system, this translates to 200 hours or more of fully interactive programs available to a virtually unlimited number of users.
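Reassembly by sequence number can be sketched as below. The heap-based reordering is only one possible way to realise the buffering and reconstruction described; it is not the patent's specific method.

```python
import heapq

def reassemble(decoded_segments):
    """Reorder segments received from several simultaneous streams into
    playback order using their sequence numbers, as the STB buffer does.
    `decoded_segments` is an iterable of (sequence_number, payload) pairs."""
    heap = list(decoded_segments)
    heapq.heapify(heap)
    return [payload for _, payload in (heapq.heappop(heap) for _ in range(len(heap)))]

# Segments arrive interleaved from two streams but play back as 1, 2, 3, 4.
print(reassemble([(2, "seg2"), (1, "seg1"), (4, "seg4"), (3, "seg3")]))
```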
While the preferred embodiment of the present invention has been described in detail by the examples, it is apparent that modifications and adaptations of the present invention will occur to those skilled in the art. It is to be expressly understood, however, that such modifications and adaptations are within the scope of the present invention, as set forth in the following claims. Furthermore, the embodiments of the present invention shall not be interpreted to be restricted by the examples or figures only.

Claims (128)

1. A system for transmitting data over a network to at least one client having a latency time to initiate transmission of said data to the client, including: at least one anti-latency signal generator for generating a plurality of anti-latency data stream containing at least a leading portion of data for receipt by a client; and at least one interactive signal generator for generating a plurality of interactive data stream containing at least a remaining portion of said data for the client to merge into after receiving at least a portion of an anti-latency data stream.
2. The system of Claim 1, wherein: said data is fragmented into K segments each requiring a time Tto transmit over the network; the anti-latency data streams includes Manti-latency data streams; and the interactive data streams includes N interactive data streams.
3. The systems of Claim 1 wherein: the anti-latency data streams contains the leading portion of said data only; the interactive data streams contains a whole set of said data.
4. The systems of Claim 2, wherein: each of the M anti-latency data stream contains substantially identical data repeated continuously within said anti-latency data stream, and wherein each successive anti-latency data stream is staggered by an anti-latency time interval; and each of N interactive data stream repeated continuously within said interactive data stream, and wherein each successive interactive data stream is staggered by an interactive time interval. The system of Claim 4, wherein: each of the M anti-latency data stream has J segments; and WO 03/013124 PCT/CN02/00527 the anti-latency time interval T.
6. The system of Claim 4, wherein the interactive time interval JT.
7. The system of Claim 5, wherein M J.
8. The system of Claim 7, wherein M J.
9. The system of Claim 6, wherein N JT R The system of Claim 9, wherein N JT
11. The system of Claim 8 or 10, wherein M N =J ,T
12. The system of Claim 4, wherein each of the N interactive data streams contains the whole set of said data having K segments.
13. The system of Claim 4, wherein each of the N interactive data streams contains the remaining portion of said data only.
14. The system of Claim 4, wherein: the client is connected to any one of the M anti-latency data streams when the client raises a request for said data; and the client is connected to any one of the N interactive data streams. The system of Claim 2, wherein the anti-latency data streams includes: I. a leading data stream containing at least one leading segment of the leading portion of said data being repeated continuously within the leading data stream; and WO 03/013124 PCT/CN02/00527 II. a plurality of finishing data streams, each of the finishing data streams: containing the rest of the leading portion of said data; and being repeated continuously within said finishing data stream, and wherein each successive finishing data stream is staggered by an anti-latency time interval; each of the N interactive data streams is repeated continuously within said interactive data stream, and wherein each successive interactive data stream is staggered by an interactive time interval.
16. The system of Claim 15, wherein each of the finishing data stream has J segments; and the anti-latency time interval T.
17. The system of Claim 15, wherein the interactive time interval JT.
18. The system of Claim 16, wherein M +1 2
19. The system of Claim 18, wherein M +1. 2 The system of Claim 17, wherein N JT
21. The system of Claim 20, wherein N JT
22. The system of Claim 19 or 21, wherein J 5K
23. The system of Claim 15, wherein each of the N interactive data streams contains the whole set of said data having K segments.
24. The system of Claim 15, wherein each of the N interactive data streams contains the remaining portion of said data only. The system of Claim 15, wherein: the client is connected to the leading data stream when the client raises a request for said data; the client is subsequently connected to any one of the finishing data streams; and the client is connected to any one of the N interactive data streams.
26. The system of Claim 2, wherein: each of the N interactive data stream is repeated continuously within said interactive data stream, and wherein each successive interactive KT data stream is staggered by an interactive time interval N the M anti-latency data streams [1 to M] are generated such that an mth anti-latency data stream has F, segments, wherein F, is an mth Fibonacci number; and the Fm segments are repeated continuously within the mth anti- latency data stream.
27. The system of Claim 26, wherein: the client is connected to at least the mth and (m+l)th anti-latency data streams when the client raises a request for said data; the data in at least the /mth and (m+l)th anti-latency data streams is buffered in the client; the client is subsequently connected to successive anti-latency data streams; and until all data in the leading portion is received by the client.
28. The system of Claim 27, wherein: the client is connected to any one of the N interactive data streams after all data in the leading portion is received by the client.
29. The system of Claim 26, wherein each of the N interactive data streams contains the whole set of said data having K segments. The system of Claim 26, wherein each of the N interactive data streams contains the remaining portion of said data only. 2K
31. The system of Claim 26, wherein Fu 2 N
32. The system of Claim 26, wherein m starts from 1.
33. The system of Claim 26, wherein m starts from 4 and the repeating 1st, 2nd and 3rd anti-latency data streams have the following configuration: [table: the 1st data stream repeats segment 1 continuously; the 2nd data stream repeats segments 2 and 3; the 3rd data stream repeats segments 4 to 7].
34. The system of Claim 2, wherein: each of the-N interactive data steams is repeated continuously within said interactive data stream, and wherein each successive interactive KT data stream is staggered by an interactive time interval N in the M anti-latency data streams, I. the leading portion of said data contains [1 to] J leading data segments [labeled]; and II. the leading data segments are distributed in the M anti-latency data streams such that an j th leading segment is repeated by an anti- latency time interval< jT within the anti-latency data streams. The system of Claim 34, wherein: the client is connected to all of the M anti-latency data streams when the client raises a request for said data; and WO 03/013124 PCT/CN02/00527 the leading portion of said data in the M anti-latency data streams is buffered in the client.
36. The system of Claim 35, wherein: the client is connected to any one of the N interactive data streams after all data in the leading portion is received by the client.
37. The system of Claim 34, wherein each of the N interactive data streams contains the whole set of said data having K segments.
38. The system of Claim 34, wherein each of the N interactive data streams contains the remaining portion of said data only.
39. The system of Claim 34, wherein M ≥ Σ(1/j) summed for j = 1 to J, and J = K/N.
40. The system of Claim 34, wherein six of the M anti-latency data streams containing the leading data segments are arranged as follows: [table: six anti-latency data streams in which the jth leading data segment is repeated within an anti-latency time interval of at most jT], wherein those segments in blank contain any data.
41. The system of Claim 2, wherein the M anti-latency data streams contains the leading portion of said data; and further includes two batches of data streams being a 1st set of anti-latency data streams and a 2nd set of anti-latency data streams.
42. The system of Claim 41, wherein: the 1 st anti-latency data streams have A 1 s t anti-latency data streams [from 1 to wherein I. an a th anti-latency data stream has F, segments, and Fa is an ath Fibonacci number; and II. the Fa segments are repeated continuously within the ath 1 st anti- latency data stream the 2 nd anti-latency data streams have B 2 nd anti-latency data streams, wherein each of the B 2 nd anti-latency data streams contains substantially identical data repeated continuously within said 2 nd anti- latency data stream, and wherein each successive 2 nd anti-latency data stream is staggered by a coarse-jump frame period; such that the client can perform a coarse-jump function when the client is connected to the B 2 nd anti-latency data stream.
43. The system of Claim 42, wherein: the client is connected to at least the a th and (a+l) t h 1 st anti-latency data streams when the client raises a request for said data; the data in at least the a t and (a+l)th 1 st anti-latency data streams is buffered in the client; the client is subsequently connected to successive 1 st anti-latency data streams; until all data in the A l st anti-latency data streams is received by the client.
44. The system of Claim 43, wherein: the client is connected to any one of the B 2 n d anti-latency data streams after all data in the 1 st anti-latency data streams is received by the client; and the client is connected to anyone of the N interactive data streams after all data in the connected B 2 nd anti-latency data stream is received by the client. The system of Claim 42, wherein each of the N interactive data streams contains the whole set of said data having K segments. WO 03/013124 PCT/CN02/00527
46.
47.
48.
49. The system of Claim 42, wherein each of the N interactive data streams contains the remaining portion of said data only. The system of Claim 42, wherein said coarse-jump frame period includes E data segments, and FA 2E. The system of Claim 42, wherein a starts from 1. The system of Claim 42, wherein a starts from 4 and the repeating 1 st 2 and 3 rd anti-latency data streams have the following configuration: 1111111111Y 1111111Lii L I 11111111111). ll1 iii1111 S1 1 1 1 13 1 3 1 1 1 1 1 1 1 1 121312131 21 iI312132i1 1 I321 ]2131 23 2_ 32 3 232 32|3|2|332 32|3 2 3 2 32 3 2 3 2 3 2 3 1 4 5|i |7 |4 i7 4 7 _4 6 7 4 5 6[71 4151_17 41 |5 714 15 7 4 5617 67415677456 7456 74567456745 The system of Claim 41, wherein: the 1 st anti-latency data streams have A 1 st anti-latency data streams [from 1 to wherein I. an a th anti-latency data stream has Fa segments, wherein Fa is an a t h Fibonacci number; and II. the Fa segments are repeated continuously within the a th 1 s anti- latency data stream the 2 nd anti-latency data streams have B 2 n d anti-latency data stream including I. a leading data stream containing at least one leading segment of the leading portion of said data being repeated continuously within the leading data stream; and II. a plurality of finishing data streams, each of the finishing data streams: containing the rest of the leading portion of said data; and WO 03/013124 PCT/CN02/00527 being repeated continuously within said finishing data stream, and wherein each successive finishing data stream is staggered by a coarse-jump frame period such that the client can perform a coarse-jump interactive function when the client is connected to the B 2 nd anti-latency data streams.
51. The system of Claim 50, wherein: the client is connected to at least the at h and (a+l)th 1 st anti-latency data streams when the client raises a request for said data; the data in at least the a th and (a+l)t h 1 st anti-latency data streams is buffered in the client; the client is subsequently connected to successive 1 st anti-latency data streams; until all data in the A 1 st anti-latency data streams is received by the client.
52. The system of Claim 51, wherein: the client is connected to the leading data stream after all data in the 1 st anti-latency data streams is received by the client; the client is subsequently connected to any one of the finishing data streams; and the client is connected to anyone of the N interactive data streams after all data in the B 2 n d anti-latency data streams is received by the client.
53. The system of Claim 50, wherein each of the N interactive data streams contains the whole set of said data having K segments.
54. The system of Claim 50, wherein each of the N interactive data streams contains the remaining portion of said data only.
55. The system of Claim 50, wherein said coarse-jump frame period includes E data segments, and FA 2E.
56. The system of Claim 50, wherein a starts from 1.
57. The system of Claim 50, wherein a starts from 4 and the repeating 1st, 2nd and 3rd data streams of the A 1st anti-latency data streams have the following configuration: [table: the 1st data stream repeats segment 1 continuously; the 2nd data stream repeats segments 2 and 3; the 3rd data stream repeats segments 4 to 7].
58. The system of Claim 41, wherein: the 1 s t anti-latency data streams have A 1 st anti-latency data streams, wherein, I. the A 1 s t anti-latency data streams contains [1 to] C 1 st data segments; and II. the 1 s t data segments are distributed in the A 1 st anti-latency data streams such that an cth leading segment is repeated by an anti- latency time interval cT within the A 1 s t anti-latency data streams; the 2 nd anti-latency data streams have B 2 nd anti-latency data streams, wherein each of the B 2 nd anti-latency data streams contains substantially identical data repeated continuously within said 2 nd anti- latency data stream, and wherein each successive 2 nd anti-latency data stream is staggered by a coarse-jump frame period; such that the client can perform a coarse-jump interactive function when the client is connected to the B 2 nd anti-latency data stream.
59. The system of Claim 58, wherein: the client is connected to all of the A 1 st anti-latency data streams when the client raises a request for said data; and data in the A 1 st anti-latency data streams is buffered in the client until all data in the A 1 t anti-latency data streams is received by the client. The system of Claim 59, wherein: WO 03/013124 PCT/CN02/00527 the client is connected to any one of the B 2" d anti-latency data streams after all data in the 1 anti-latency data streams is received by the client; and the client is connected to anyone of the N interactive data streams after all data in the connected B 2 nd anti-latency data stream is received by the client.
61. The system of Claim 58, wherein each of the N interactive data streams contains the whole set of said data having K segments.
62. The system of Claim 58, wherein each of the N interactive data streams contains the remaining portion of said data only.
63. The system of Claim 58, wherein said coarse-jump frame period includes E c=E l data segments, and A 2 c=1 C
64. The system of Claim 58, wherein six of the A 1 st anti-latency data streams are arranged as follows: (TTTT'1 1 i iii ililii 1 i9 33i 3 1 1 i111112191:3TLML3ll 2 4 2 S 2 4 2 16 2 4 2 S 2 42 232 2 4 2 S 2 4 216 2 4 2 8 2 4 2 64 2 4 3 |6 9 3112142 3 6 1 3 249 3 6 3 112 483 6 9 3 3639 3 6 18 3 1219 3 6 3 51101i51.51 15 0 zo I 1501301 351540l151 1 15 10o45 501 15 2055\2560 5 1 7 114|211111171|9 7 12812212326|27117 11412911313333417 171211111937317 113223323 |41 43|44 1461471491 1 1 15 521 1 54 56 57 5S 591 1 6263 wherein those segments in blank contains any data. The system of Claim 41, wherein: the 1 st anti-latency data streams have A 1 st anti-latency data streams, wherein, I. the A 1 st anti-latency data streams contains C 1 s data segments; and WO 03/013124 PCT/CN02/00527 II. the data segments I are distributed in the A 1 st anti-latency data streams such that an cth leading segment is repeated by an anti- latency time interval cT within the A 1 st anti-latency data streams; the 2 nd anti-latency data streams have B 2 nd anti-latency data stream including I. a leading data stream containing at least one leading segment of the leading portion of said data being repeated continuously within the leading data stream; and II. a plurality of finishing data streams, each of the finishing data streams: containing the rest of the leading portion of said data; and being repeated continuously within said finishing data stream, and wherein each successive finishing data stream is staggered by a coarse-jump frame period such that the client can perform a coarse-jump interactive function when the client is connected to the B 2 n d anti-latency data streams.
66. The system of Claim 65, wherein: the client is connected to all of the A 1 st anti-latency data streams when the client raises a request for said data; and data in the A 1 st anti-latency data streams is buffered in the client until all data in the A 1 st anti-latency data streams is received by the client.
67. The system of Claim 66, wherein: the client is connected to the leading data stream of the B 2 nd anti- latency data streams after all data in the 1 s t anti-latency data streams is received by the client; the client is subsequently connected to any one of the finishing data streams; and the client is connected to anyone of the IV interactive data streams after all data in the B 2 nd anti-latency data stream connected in step F is received by the client. WO 03/013124 PCT/CN02/00527
68. The system of Claim 65, wherein each of the N interactive data streams contains the whole set of said data having K segments.
69. The system of Claim 65, wherein each of the N interactive data streams contains the remaining portion of said data only. The system of Claim 65, wherein said coarse-jump frame period includes E -c=E 1I data segments, and A 2-c=
71. The system of Claim 67, wherein six of the A 1st anti-latency data streams are arranged as follows: [table: six anti-latency data streams in which the cth leading data segment is repeated within an anti-latency time interval of at most cT], wherein those segments in blank contain any data.
72. The system of any one of Claims 2, 4, 15, 26, 34, 41, 42, 50, 58, or wherein each of the K data segments contains a head portion and a tail portion, and the head portion contain a portion of data of the tail portion of the immediate preceding segment to facilitate merging of the K data segments when received by the client.
73. The system of any one of Claims 2, 4, 15, 26, 34, 41, 42, 50, 58, or wherein at least a portion of data in the leading portion is pre-fetched in the client.
74. A system for transmitting data over a network to at least one client including a signal generator for fragmenting said data into K data segments each requiring a time T to transmit over the network, wherein each of the K data segments contains a head portion and a tail portion, and the head portion contains a portion of data of the tail portion of the immediate preceding segment to facilitate merging of the K data segments when received by the client. A system for transmitting data over a network to at least one client having a latency time to initiate transmission of said data to the client, including: at least one anti-latency signal generator for generating at least one of anti-latency data stream containing at least a leading portion of data for receipt by the client; a buffer in the client for pre-fetching the leading portion in the client as pre-fetched data; and at least one interactive signal generator for generating at least one interactive data stream containing at least a remaining portion of said data for the client to merge into the leading portion.
76. The system of Claim 75, wherein the pre-fetched data is refreshed during a refresh time period.
77. The system of Claim 76, wherein the refresh time period is an off-peak period.
78. The method of Claim 76, wherein pre-fetched data is refreshed once per day.
79. A system for transmitting data over a network to at least one client including at least one anti-latency signal generator for generating a plurality of anti- latency data streams, the anti-latency data streams include: a leading data stream containing at least one leading segment of a leading portion of said data being repeated continuously within the leading data stream; and a plurality of finishing data streams, each of the finishing data streams: containing at least the rest of the leading portion of said data; and WO 03/013124 PCT/CN02/00527 repeated continuously within said finishing data stream, and wherein each successive finishing data stream is staggered by an anti-latency time interval.
80. The system of Claim 79, wherein: the client is connected to the leading data stream when the client raises a request for said data; and the client is subsequently connected to any one of the finishing data streams.
81. The system of Claim 79, wherein said data is fragmented into K segments each requiring a time T to transmit over the network, and the anti-latency time interval T.
82. A system for transmitting data over a network to at least one client including at least one anti-latency signal generator for generating a plurality of anti- latency data streams, wherein the anti-latency data streams include: M anti-latency data streams from [1 to wherein an m'h anti-latency data stream has Fm segments, and Fm is an m th Fibonacci number; and wherein said segments are repeated continuously within the mth anti-latency data stream.
83. The system of Claim 82, wherein: the client is connected to at least the m t h and (r+l)th anti-latency data streams when the client raises a request for said data; the data in at least the mth and (m+l)th anti-latency data streams is buffered in the client; the client is subsequently connected to successive anti-latency data streams; and until all data is received by the client.
84. The system of Claim 82, wherein m starts from 1.
85. The system of Claim 82, wherein m starts from 4 and the repeating 1st, 2nd and 3rd anti-latency data streams have the following configuration: [table: the 1st data stream repeats segment 1 continuously; the 2nd data stream repeats segments 2 and 3; the 3rd data stream repeats segments 4 to 7].
86. A system for transmitting data over a network to at least one client, said data being fragmented into K segments each requiring a time T to transmit over the network, including at least one anti-latency signal generator for generating a plurality of anti-latency data streams, wherein the anti-latency data streams include: M anti-latency data streams containing [1 to K] anti-latency data segments, wherein the anti-latency data segments are distributed in the M anti-latency data streams such that an k t h leading segment is repeated by an anti-latency time interval kT within the anti-latency data streams.
87. The system of Claim 86, wherein: the client is connected to all of the M anti-latency data streams; and said data in the Manti-latency data streams is buffered in the client when the client raises a request for said data.
88. The system of Claim 86, wherein six of the M anti-latency data streams containing the leading data segments are arranged as follows: [table: six anti-latency data streams in which the kth leading data segment is repeated within an anti-latency time interval of at most kT], wherein those segments in blank contain any data.
89. A receiver for receiving data being transmitted over a network to at least one client according to Claim 2, including: a processor for raising a request for said data; and at least one connector for connecting the client to the M anti-latency data streams and receiving data in the M anti-latency data streams.
90. The receiver of Claim 89, wherein: the connector is connected to the N interactive data streams after all data in the M anti-latency data streams is received by the receiver.
91. The receiver of Claim 89, wherein data in the leading portion is received sequentially.
92. The receiver of Claim 89, wherein the receiver connects to at least two of the anti-latency data streams simultaneously.
93. The receiver of Claim 92 further including: a buffer for buffering data in the two anti-latency data streams connected to the client that is received by the client sequentially.
94. The receiver of Claim 93, wherein the buffer includes random access memory and computer hard disk.
96. The receiver of Claim 89, wherein the receiver connects to all of the anti- latency data streams simultaneously.
97. The receiver of Claim 96 further including: a buffer for buffering data in the anti-latency data streams connected in the client; and wherein the processor rearranges the buffered data according to a proper sequence.
98. The receiver of Claim 97, wherein the buffer includes random access memory and computer hard disk.
99. The receiver of Claim 97, wherein the buffer consists of random access memory.
100. The receiver of Claim 89, wherein at least a portion of data in the M anti- latency data streams is pre-fetched in the client as pre-fetched data.
101. The receiver of Claim 100, wherein the pre-fetched data is refreshed during a refresh time period.
102. The receiver of Claim 101, wherein the refresh time period is 01:00-06:00.
103. The receiver of Claim 101, wherein the refresh time period is 10:00-15:00.
104. A receiver for receiving data being transmitted over a network to at least one client, wherein said data includes a leading portion and a remaining portion, and the remaining portion is transmitted by at least one interactive data stream, including: a buffer for pre-fetching the leading portion in the client as pre-fetched data; and a processor for merging the pre-fetched data to the remaining portion. -49- tb105. The receiver of Claim 104, wherein the pre-fetched data is refreshed during a ;refresh time period.
106. The receiver of Claim 105, wherein the refresh time period is an off-peak period. 00 00 107. The receiver of claim 105, wherein pre-fetched data is refreshed once per day. Cc 5 108. A system for transmitting data over a network to at least one client having a latency time to initiate transmission of said data to the client, including: Ni at least one anti-latency signal generator for generating a plurality of anti- latency data stream containing at lest a leading portion of said data for receipt by the client; and at least one interactive signal generator for generating a plurality of interactive data stream containing at least a remaining portion of said data for the client to merge into after receiving at least a portion of an anti-latency data stream. wherein: the leading portion of said data can be generated at anti-latency stream intervals; and is generated at the next earliest anti-latency stream interval after at least one client raises a request for said data.
109. The system of Claim 108, wherein: said data requiring a time R to be transmitted over the network is fragmented into K segments each requiring a time T to transmit over the network; the anti-latency data streams includes M anti-latency data streams, wherein each of the M anti-latency data stream S contains substantially identical data can be generated at regular anti-latency time intervals; and S are generated at the next earliest anti-latency stream interval after the client raises a request for said data; WO 03/013124 PCT/CN02/00527 the interactive data streams includes N interactive data streams, wherein each of the N interactive data stream is repeated continuously within said interactive data stream, and each successive interactive data stream is staggered by an interactive time interval.
110. The system of Claim 109, wherein: each of the M anti-latency data stream has J segments; and the anti-latency time interval T.
111. The system of Claim 110, wherein the interactive time interval JT.
112. The system of Claim 111, wherein M J. R
113. The system of Claim 110, wherein N JT
114. The system of Claim 113, wherein M =N =J
115. The system of Claim 109, wherein each of the N interactive data streams contains the whole set of said data having K segments.
116. The system of Claim 109, wherein each of the N interactive data streams contains the remaining portion of said data only.
117. The system of Claim 109, wherein: the client is connected to the M anti-latency data stream generated for the client when the client raises the request for said data; the client is connected to any one of the N interactive data streams; and the M anti-latency data stream generated for the client is terminated after the client is connected to one of the N interactive data streams.
118. The system of Claim 108, wherein: WO 03/013124 PCT/CN02/00527 said data requiring a time R to be transmitted over the network is fragmented into K segments each requiring a time T to transmit over the network; the anti-latency data streams includes M anti-latency data streams including: I. a leading data stream that contains at least one leading segment of the leading portion of said data can be generated at regular anti-latency time intervals; and are generated at the next earliest anti-latency stream interval after the client raises a request for said data; II. a plurality of finishing data streams, wherein each of the finishing data streams that: contains the rest of the leading portion of said data; corresponds to one of the leading segments; and are generated when the corresponding leading segment is generated; the interactive data streams includes N interactive data streams, wherein each of the N interactive data streams is repeated continuously within said interactive data stream, and each successive interactive data stream is staggered by an interactive time interval.
119. The system of Claim 118, wherein each of the finishing data stream has J segments; and the anti-latency time interval T.
120. The system of Claim 119, wherein the interactive time interval JT.
121. The system of Claim 119, wherein M +1 2
122. The system of Claim 120, wherein N JT WO 03/013124 PCT/CN02/00527
123. The system of Claim 120, wherein J [2K
124. The system of Claim 118, wherein each of the N interactive data streams contains the whole set of said data having K segments.
125. The system of Claim 118, wherein each of the N interactive data streams contains the remaining portion of said data only.
126. The system of Claim 118, wherein: the client is connected to the leading data segment generated for the client when the client raises the request for said data; the client is subsequently connected to the corresponding finishing data stream; the client is connected to any one of the N interactive data streams; and the leading data segment and the corresponding finishing data stream generated for the client is terminated after the client is connected to one of the N interactive data streams.
127. The system of Claim 108, wherein: said data requiring a time R to be transmitted over the network is fragmented into K segments each requiring a time T to transmit over the network; the interactive data streams includes N interactive data streams, wherein each of the N interactive data stream is repeated continuously within said interactive data stream, and wherein each successive interactive data stream is staggered by an interactive time interval KT the anti-latency data streams includes M anti-latency data streams, such that an mth anti-latency data stream has Fm segments, wherein Fm is an mth Fibonacci number; WO 03/013124 PCT/CN02/00527 the F, segments can be generated at regular anti-latency stream intervals; the first Fm segment is generated at the next earliest anti-latency stream interval when the client raises a request for said data; and subsequent segments are generated before all data in the preceding F,n segment is received by the client.
128. The system of Claim 127, wherein: the client is connected to at least the mth and (m+l)th anti-latency data streams when the client raises a request for said data; the data in at least the mth and (m+l)th anti-latency data streams is buffered in the client; the client is subsequently connected to successive anti-latency data streams before all data in the leading portion is received by the client.
129. The system of Claim 127, wherein: the client is connected to any one of the Ninteractive data streams after all data in the leading portion is received by the client; and the M anti-latency data streams is terminated after the client is connected to one of the N interactive data streams.
130. The system of Claim 127, wherein each of the N interactive data streams contains the whole set of said data having K segments.
131. The system of Claim 127, wherein each of the N interactive data streams contains the remaining portion of said data only. 2K
132. The system of Claim 127,.wherein FM N
133. The system of Claim 127, wherein m starts from 1.
134. The system of Claim 127, wherein m starts from 4 and the repeating 1st, 2nd, and 3rd anti-latency data streams have the following configuration: [table: the 1st data stream repeats segment 1 continuously; the 2nd data stream repeats segments 2 and 3; the 3rd data stream repeats segments 4 to 7].
136. The anti-latency signal generator of Claim 1-35, wherein: the client is connected to the leading data stream when the client raises a request for said data; and the client is subsequently connected to the corresponding finishing data stream.
137. The anti-latency signal generator of Claim 135, wherein said data is fragmented into K segments each requiring a time T to transmit over the network, and the anti-latency time interval T.
138. An anti-latency signal generator for generating M anti-latency data streams to transmit data over a network to at least one client, wherein an mth anti-latency data stream has Fm segments, and Fm is an mth Fibonacci number; the Fm segments can be generated at regular anti-latency stream intervals; the first Fm segment is generated at the next earliest anti-latency stream interval when the client raises a request for said data; and subsequent segments are generated before all data in the preceding Fm segment is received by the client.
139. The anti-latency signal generator of Claim 138, wherein: the client is connected to at least the mh and (m+l)h anti-latency data streams when the client raises a request for said data; the data in at least the mf and (m+l)h anti-latency data streams is buffered in the client; the client is subsequently connected to successive anti-latency data streams until all data in the leading portion is received by the client.
140. The anti-latency signal generator of Claim 138, wherein m starts from 1.
141. The anti-latency signal generator of Claim 138, wherein m starts from 4 and the repeating 1st, 2nd, and 3rd anti-latency data streams have the following configuration: [table: the 1st data stream repeats segment 1 continuously; the 2nd data stream repeats segments 2 and 3; the 3rd data stream repeats segments 4 to 7].
142. A system for transmitting data over a network to at least one client having a latency time to initiate transmission of said data to the client substantially as herein described with reference to any one of the embodiments of the invention illustrated in the accompanying drawings and/or examples.
143. A system for transmitting data over a network to at least one client including a signal generator for fragmenting said data into K data segments each requiring a time T to transmit over the network substantially as herein described with reference to any one of the embodiments of the invention illustrated in the accompanying drawings and/or examples.
144. A system for transmitting data over a network to at least one client including at least one anti-latency signal generator for generating a plurality of anti-latency data streams substantially as herein described with reference to any one of the embodiments of the invention illustrated in the accompanying drawings and/or examples.
145. A system for transmitting data over a network to at least one client, said data being fragmented into K segments each requiring a time T to transmit over the network, including at least one anti-latency signal generator for generating a plurality of anti- latency data streams substantially as herein described with reference to any one of the embodiments of the invention illustrated in the accompanying drawings and/or examples.
146. A receiver for receiving data being transmitted over a network to at least one client substantially as herein described with reference to any one of the embodiments of the invention illustrated in the accompanying drawings and/or examples.
147. An anti-latency signal generator for generating a plurality of anti-latency data streams to transmit data over a network to at least one client substantially as herein described with reference to any one of the embodiments of the invention illustrated in the accompanying drawings and/or examples.
148. An anti-latency signal generator for generating M anti-latency data streams to transmit data over a network to at least one client substantially as herein described with reference to any one of the embodiments of the invention illustrated in the accompanying drawings and/or examples.

DATED this 6th Day of October 2005
Shelston IP
Attorneys for: Dinastech IPR Limited
AU2002322988A 2001-07-31 2002-07-29 System for delivering data over a network Ceased AU2002322988C1 (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US09/917,639 US7574728B2 (en) 2001-07-31 2001-07-31 System for delivering data over a network
US09/917,639 2001-07-31
US09/954,041 US7200669B2 (en) 2001-07-31 2001-09-18 Method and system for delivering large amounts of data with interactivity in an on-demand system
US09/954,041 2001-09-18
PCT/CN2002/000527 WO2003013124A2 (en) 2001-07-31 2002-07-29 System for delivering data over a network

Publications (3)

Publication Number Publication Date
AU2002322988A1 AU2002322988A1 (en) 2003-05-29
AU2002322988B2 true AU2002322988B2 (en) 2007-11-15
AU2002322988C1 AU2002322988C1 (en) 2008-05-22

Family

ID=27129728

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2002322988A Ceased AU2002322988C1 (en) 2001-07-31 2002-07-29 System for delivering data over a network

Country Status (7)

Country Link
EP (1) EP1433324A4 (en)
JP (1) JP2005505957A (en)
KR (1) KR100639428B1 (en)
CN (1) CN100477786C (en)
AU (1) AU2002322988C1 (en)
CA (1) CA2451901C (en)
WO (1) WO2003013124A2 (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7174384B2 (en) 2001-07-31 2007-02-06 Dinastech Ipr Limited Method for delivering large amounts of data with interactivity in an on-demand system
US7200669B2 (en) * 2001-07-31 2007-04-03 Dinastech Ipr Limited Method and system for delivering large amounts of data with interactivity in an on-demand system
US7574728B2 (en) 2001-07-31 2009-08-11 Dinastech Ipr Limited System for delivering data over a network
CN1228982C (en) * 2002-12-05 2005-11-23 国际商业机器公司 Channel combination method of VOD system
US6932435B2 (en) 2003-11-07 2005-08-23 Mckechnie Vehicle Components (Usa), Inc. Adhesive patterns for vehicle wheel assemblies
EP1781034A4 (en) * 2004-07-27 2011-04-27 Sharp Kk Pseudo video-on-demand system, pseudo video-on-demand system control method, and program and recording medium used for the same
US8533765B2 (en) 2005-08-26 2013-09-10 Thomson Licensing On demand system and method using dynamic broadcast scheduling
CN101146211B (en) * 2006-09-11 2010-06-02 思华科技(上海)有限公司 Load balance system and method of VoD network
EP1914932B1 (en) * 2006-10-19 2010-12-15 Thomson Licensing Method for optimising the transmission of DVB-IP service information by partitioning into several multicast streams
EP2819364A1 (en) * 2013-06-25 2014-12-31 British Telecommunications public limited company Content distribution system and method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0749241A2 (en) * 1995-06-15 1996-12-18 International Business Machines Corporation Fixed video-on-demand
US6233017B1 (en) * 1996-09-16 2001-05-15 Microsoft Corporation Multimedia compression system with adaptive block sizes

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5822530A (en) * 1995-12-14 1998-10-13 Time Warner Entertainment Co. L.P. Method and apparatus for processing requests for video on demand versions of interactive applications
JP3825099B2 (en) * 1996-09-26 2006-09-20 富士通株式会社 Video data transfer method and video server device
US6563515B1 (en) * 1998-05-19 2003-05-13 United Video Properties, Inc. Program guide system with video window browsing
KR20010080591A (en) * 1999-09-27 2001-08-22 요트.게.아. 롤페즈 Scalable system for video-on-demand
US7200669B2 (en) * 2001-07-31 2007-04-03 Dinastech Ipr Limited Method and system for delivering large amounts of data with interactivity in an on-demand system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0749241A2 (en) * 1995-06-15 1996-12-18 International Business Machines Corporation Fixed video-on-demand
US5724646A (en) * 1995-06-15 1998-03-03 International Business Machines Corporation Fixed video-on-demand
US6233017B1 (en) * 1996-09-16 2001-05-15 Microsoft Corporation Multimedia compression system with adaptive block sizes

Also Published As

Publication number Publication date
CA2451901C (en) 2010-02-16
EP1433324A2 (en) 2004-06-30
EP1433324A4 (en) 2007-04-18
KR20040041574A (en) 2004-05-17
KR100639428B1 (en) 2006-10-30
WO2003013124A2 (en) 2003-02-13
JP2005505957A (en) 2005-02-24
AU2002322988C1 (en) 2008-05-22
WO2003013124A3 (en) 2003-05-15
CN1535536A (en) 2004-10-06
CA2451901A1 (en) 2003-02-13
CN100477786C (en) 2009-04-08

Similar Documents

Publication Publication Date Title
AU2002322987B2 (en) Method for delivering data over a network
US7590751B2 (en) Method for delivering large amounts of data with interactivity in an on-demand system
AU2002322987A1 (en) Method for delivering data over a network
US8302144B2 (en) Distribution of content in an information distribution system
EP2280545B1 (en) Method and system for delivering digital content
US7047307B2 (en) Method and apparatus for broadcasting media objects with guaranteed quality of service
ZA200209223B (en) Methods for providing video-on-demand services for broadcasting systems.
WO2000074367A9 (en) Method of optimizing near-video-on-demand transmission
AU2002322988B2 (en) System for delivering data over a network
AU2002322988A1 (en) System for delivering data over a network
US7574728B2 (en) System for delivering data over a network
US20020138845A1 (en) Methods and systems for transmitting delayed access client generic data-on demand services
Thirumalai et al. Tabbycat: an inexpensive scalable server for video-on-demand
WO2001093062A1 (en) Methods for providing video-on-demand services for broadcasting systems
WO2002086673A2 (en) Transmission of delayed access client data and demand
KR20040063795A (en) Transmission of delayed access client data and demand
EP1402331A2 (en) Methods and systems for transmitting delayed access client generic data-on demand services

Legal Events

Date Code Title Description
DA2 Applications for amendment section 104

Free format text: THE NATURE OF THE AMENDMENT IS AS SHOWN IN THE STATEMENT(S) FILED 28 NOV 2007.

FGA Letters patent sealed or granted (standard patent)
DA3 Amendments made section 104

Free format text: THE NATURE OF THE AMENDMENT IS AS SHOWN IN THE STATEMENT(S) FILED 28 NOV 2007

MK14 Patent ceased section 143(a) (annual fees not paid) or expired