EP1433324A2 - System for delivering data over a network - Google Patents

System for delivering data over a network

Info

Publication number
EP1433324A2
Authority
EP
European Patent Office
Prior art keywords
data
latency
client
data streams
streams
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP02754152A
Other languages
German (de)
French (fr)
Other versions
EP1433324A4 (en)
Inventor
Kwok-Wai Cheung
Raymond Kwong-Wing Chan
Gin-Man Chan
Wing-Kai Lam
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
DINASTech IPR Ltd
Original Assignee
DINASTech IPR Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US09/917,639 external-priority patent/US7574728B2/en
Priority claimed from US09/954,041 external-priority patent/US7200669B2/en
Application filed by DINASTech IPR Ltd filed Critical DINASTech IPR Ltd
Publication of EP1433324A2 publication Critical patent/EP1433324A2/en
Publication of EP1433324A4 publication Critical patent/EP1433324A4/en


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/4722End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting additional data associated with the content
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/61Network physical structure; Signal processing
    • H04N21/6156Network physical structure; Signal processing specially adapted to the upstream path of the transmission network
    • H04N21/6181Network physical structure; Signal processing specially adapted to the upstream path of the transmission network involving transmission via a mobile phone network
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/65Transmission of management data between client and server
    • H04N21/658Transmission by the client directed to the server
    • H04N21/6581Reference data, e.g. a movie identifier for ordering a movie or a product identifier in a home shopping application
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/812Monomedia components thereof involving advertisement data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/16Analogue secrecy systems; Analogue subscription systems
    • H04N7/173Analogue secrecy systems; Analogue subscription systems with two-way working, e.g. subscriber sending a programme selection signal
    • H04N7/17309Transmission or handling of upstream communications
    • H04N7/17336Handling of requests in head-ends
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/16Analogue secrecy systems; Analogue subscription systems
    • H04N7/173Analogue secrecy systems; Analogue subscription systems with two-way working, e.g. subscriber sending a programme selection signal
    • H04N2007/1739Analogue secrecy systems; Analogue subscription systems with two-way working, e.g. subscriber sending a programme selection signal the upstream communication being transmitted via a separate link, e.g. telephone line

Definitions

  • This invention relates to methods and systems for delivering data over a network, particularly those for delivering a large amount of data with repetitive content to a large number of clients, like Video-on-Demand (VOD) systems.
  • VOD: Video-on-Demand
  • an NVOD system consists of staggered multicast streams with a regular stream interval T (Figure 1).
  • the streams are multiplexed onto the same or different physical media for distribution to the users via some multiplexing mechanisms (such as time-division multiplexing, frequency-division multiplexing, code-division multiplexing, wavelength-division multiplexing, etc.).
  • the distribution mechanisms include point-to-point, point-to-multipoint and other methods.
  • Each stream is divided into regular segments of interval T, and the segments are labelled 1, 2, 3, ..., N respectively.
  • the content that is to be distributed to the users is carried on the N segments and the content is replicated on all these streams. The content is also repeated on each stream in time.
  • QVOD: Quasi-VOD with irregular stream interval
  • a QVOD system consists of staggered multicast streams with irregular stream intervals ( Figure 2).
  • the streams are multiplexed onto the same or different physical media for distribution to the users via some multiplexing mechanisms (such as time-division multiplexing, frequency-division multiplexing, code-division multiplexing, wavelength-division multiplexing, etc.).
  • the distribution mechanisms include point-to-point, point-to-multipoint and other methods.
  • the streams in a QVOD system are created on demand from the users' request for the content.
  • the users' requests within a certain time interval Ti are batched together and served together by Stream i.
  • the stream intervals T1, T2, ..., Ti, ... are irregular.
  • the streams (Stream 1 to Stream i, etc.) are all provided on demand and will be removed as soon as the content distribution has been completed.
  • the streams are constantly created as users' requests come in.
  • the particular group of users starting within interval Ti is guaranteed to receive the contents within Ti (the start-up latency).
  • however, there is no provision for user interactivity in such a system. If a user interrupts the content viewing, say by pausing the display, the user cannot resume viewing at the play point where the pause occurred and is forced to skip some content to keep up with the continuously playing multicast stream.
  • DINA: Distributed Interactive Network Architecture
  • a user may have to wait as long as one stream interval T before the request is served, and the waiting time may be as large as many minutes or even hours, depending on the stream interval.
  • although the stream interval can be made very small, say even down to a few seconds, this also means that the system has to provide a large number of streams for serving the same amount of content.
  • NVOD and QVOD systems cannot allow VCR-like interactivity such as pause, resume, rewind, slow motion, fast forward, and so on. These systems also hinder the deployment of new forms of interactive media.
  • one popular approach to offering some form of VCR-like interactivity over NVOD and QVOD systems is to add a storage unit to the set-top box (STB) so as to cache all the available content being broadcast.
  • STB: set-top box
  • this invention provides, in the broad sense, a method and the corresponding system for transmitting data over a network to at least one client having a latency time to initiate transmission of said data to the client.
  • the method of this invention includes the steps of: generating at least one anti-latency data stream containing at least a leading portion of data for receipt by a client; and generating at least one interactive data stream containing at least a remaining portion of said data for the client to merge into after receiving at least a portion of an anti-latency data stream.
  • the anti-latency data streams and the interactive data streams may be generated by at least one anti-latency signal generator and at least one interactive signal generator, respectively.
  • the K data segments may be generated by a signal generator.
  • This invention also provides a method and the corresponding system for transmitting data over a network to at least one client including the steps of generating a plurality of anti-latency data streams, in which the anti-latency data streams include: a leading data stream containing at least one leading segment of a leading portion of said data being repeated continuously within the leading data stream; and a plurality of finishing data streams, each of the finishing data streams containing at least the rest of the leading portion of said data and being repeated continuously within said finishing data stream;
  • each successive finishing data stream is staggered by an anti-latency time interval.
  • This invention further provides a method and the corresponding system for transmitting data over a network to at least one client.
  • the method includes the steps of generating M anti-latency data streams numbered 1 to M, wherein the m-th anti-latency data stream has Fm segments, and Fm is the m-th Fibonacci number; and wherein said Fm segments are repeated continuously within the m-th anti-latency data stream.
  • the method includes the steps of generating M anti-latency data streams containing 1 to K anti-latency data segments, wherein the anti-latency data segments are distributed in the M anti-latency data streams such that the k-th leading segment is repeated within an anti-latency time interval ≤ kT within the anti-latency data streams.
  • This invention further provides a method for receiving data being transmitted over a network to at least one client.
  • the data to be transmitted is fragmented into K segments each requiring a time T to transmit over the network.
  • the data is divided into two batches of data streams, the anti-latency data streams include M anti-latency data streams, and the interactive data streams include N interactive data streams.
  • the method for receiving the data includes the steps of raising a request for said data (the request may be raised by a processor of the client), and connecting the client to the M anti-latency data streams and receiving data in the M anti-latency data streams.
  • the client or the receiver may connect to the anti-latency data streams by a connector.
  • This invention also provides a method and a corresponding system for receiving data being transmitted over a network to at least one client, wherein said data includes a leading portion and a remaining portion, and the remaining portion is transmitted by at least one interactive data stream, including the steps of: pre-fetching the leading portion in the client as pre-fetched data, which is contained in the buffer of the client; and merging the pre-fetched data with the remaining portion by a processor.
  • the anti-latency data streams can be generated upon request from the client.
  • Figure 1 shows the data stream structure of a NVOD system.
  • Figure 2 shows the data stream structure of a QVOD system.
  • Figure 3 shows the overall system architecture of the data transmission system of this invention.
  • Figure 4 shows the data streams arrangement of Configuration 1 of the data transmission system of this invention.
  • Figure 5 shows the data streams arrangement of Configuration 2 of the data transmission system of this invention.
  • Figure 6 shows the data streams arrangement of Configuration 3 of the data transmission system of this invention. Note the difference in the arrangement of the Group II data streams compared with Figures 4 & 5.
  • Figure 7 shows yet another Group I data streams arrangement of Configuration 3.
  • Figure 8 shows the data streams arrangement of Group I data streams of Configuration 4 of the data transmission system of this invention.
  • Figure 9 shows yet another arrangement of Group I data streams of Configuration 4 of the data transmission system of this invention.
  • Figure 10 shows one of the data streams arrangements of Configuration 5 of the data transmission system of this invention.
  • the particular arrangement of Group I data streams shown in this figure combines Configurations 1 & 3.
  • Figure 11 shows the system configuration of a multicast data streams generator of the data transmission system of this invention.
  • Figure 12 shows the system configuration of the receiver of the data transmission system of this invention.
  • Figure 13 shows the local storage versus transmission bandwidth trade-off relationship.
  • Figure 14 shows an alternative "on-demand" approach of Configuration 1.
  • Figure 15 shows an alternative "on-demand" approach of Configuration 2.
  • Figure 16 shows an alternative "on-demand" approach of Configuration 3.
  • this invention may be used for deploying operating system software to a large number of clients through a network upon request. Further, this invention may be utilised in data transmission systems handling a large amount of data with repetitive content, for instance in a video system bus of a computer handling many complicated but replicated 3D objects. Moreover, this invention may not be limited to the transmission of digital data only.
  • a multi-stream multicasting technique is used to overcome the existing problems in VOD systems as described in the Background section.
  • the users are allowed VCR-like interactivity without the need to add a storage unit at the STB to cache all the content that may be viewed by the user on a daily basis.
  • FIG 3 shows the system configuration.
  • the multicast streams are generated from a multicast server unit.
  • the streams are multiplexed onto the physical media and distributed to the end users through a distribution network.
  • at the user end, a set-top box such as a DDVR selects a multitude of streams for processing.
  • STB: set-top box
  • the start-up latency may be minimized while the users are provided with interactive functions.
  • the DDVR should have sufficient bandwidth, buffer and processing capability to handle the multi-streams.
  • the data transmission system of this invention which may be called an IVOD system, may look similar to the NVOD system.
  • the IVOD and NVOD systems are differentiated by the following points:
  • the term "staggered" used above and throughout the specification in describing the data streams refers to the situation in which each of the data streams begins transmission at a different time. Therefore, two "frames" of two adjacent data streams, in which the term "frame" represents the repeating unit of each data stream, are separated by a time interval.
  • the data transmission method and system may be described as providing two groups of data streams Group I and II.
  • Group I data streams, which may be termed anti-latency data streams, may serve to reduce the latency for starting up the transmission of the required data.
  • Group I data streams may be generated by at least one anti-latency signal generator.
  • Group II data streams which may be termed interactive data streams, may serve to provide the desired interactive functions to the users.
  • Group II data streams may be generated by at least one interactive signal generator.
  • For the interactive functions provided by Group II data streams, reference can be made to the applicant's PCT applications nos. PCT/IB00/001857 & 001858, the contents of which are incorporated herein by reference. The operation of the interactive functions is not considered to be part of the invention in this application and the details will not be further described here.
  • the content to be transmitted having a total amount of data Q requires a total time R to be transmitted over the network.
  • the content for example, may be a movie.
  • the Q data is broken up into K segments each having an amount of data S. Each data segment requires a time T to be transmitted over the network.
  • Q and S may be in units of megabytes, while R and T are in units of time.
  • the data segments ofthe Q data are labelled from 1 to K
  • the Q data may be divided into a leading portion and a remaining portion.
  • the Group I anti-latency data streams may contain the leading portion only.
  • the Group II interactive data streams may contain the remaining portion or the whole set of the Q data, and this may be a matter of design choice to be determined by the system manager.
  • the system may still work if the individual data segments contain different amounts of data from each other, provided that they all require a time T for transmission. This may be achievable by controlling the transmission rate of the individual data segments.
  • individual data segments may nonetheless be preferred to have the same amount of data S for the sake of engineering convenience.
  • it may be relatively more difficult to implement the system if each of the data segments has the same amount of data S but a different transmission time.
  • although the following description refers to the transmission of one set of data, for instance a movie, it should be apparent to one skilled in the art that the method and system may also be adapted to transmit a number of sets of data depending on, for example, the bandwidth available.
  • Dual streaming means that each user will tap into at most two of the multicast data streams at any time. Most of the time, the user may only be tapping into one data stream.
  • the segments are put onto the staggered streams as shown in Figure 4. There are two groups of staggered streams. For Group I anti-latency data streams, there are J segments on each frame. T is the anti-latency time interval and may also be the upper bound for the start-up latency of the IVOD system. Each anti-latency data stream is preferably staggered by the anti-latency time interval T, although the anti-latency time interval may be set at any desired value other than T.
  • J is equal to 16 and T is 30 seconds. So the frames in each of the Group I data streams repeat themselves after a time JT of 8 minutes. There are a total of M streams in Group I.
  • there are N interactive data streams, with each of them being staggered by an interactive time interval.
  • although the interactive time interval may again be set at any desired value, it is preferably JT (i.e. 8 minutes in this example) for the sake of engineering convenience.
  • if the length of the content is R (say R equals 120 minutes), there should be at least a total of R/(JT) = 15 streams in Group II. N may be larger than this value.
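  • As a rough numerical sketch of Configuration 1 using the example figures above (and assuming one Group I stream per anti-latency interval, so that M = J, which the text leaves as a design choice), the stream counts work out as follows:

```python
# Back-of-the-envelope stream counts for Configuration 1 (illustrative sketch only).
# Assumption: one Group I (anti-latency) stream per anti-latency interval, i.e. M = J.
R_min = 120                      # length of the content R, in minutes (example above)
T_sec = 30                       # anti-latency time interval T, in seconds
J = 16                           # segments per Group I frame

K = R_min * 60 // T_sec          # total number of segments: 240
frame_min = J * T_sec // 60      # Group I frame period JT: 8 minutes
N = R_min // frame_min           # minimum number of Group II streams: 15
M = J                            # assumed number of Group I streams: 16

print(f"K = {K}, JT = {frame_min} min, N >= {N}, total = M + N = {M + N}")
```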
  • the DDVR at the user end will select one stream from Group I (Stream Ii) and one stream from Group II (Stream IIj) to tap into.
  • when the client connects to Streams Ii and/or IIj, the data streams are processed by the DDVR (the client), and the segments are buffered according to the segment sequence number.
  • the availability of the Group I staggered streams with stream interval T minimises the start-up latency to be equal to T.
  • each Group II data stream may preferably contain only the remaining portion of the Q data.
  • the method for merging data streams can be found in the DINA technology. After merging, the Group I stream may no longer be needed and the DDVR may then rely solely on Stream IIj for subsequent viewing. This may be the preferred alternative, as it minimises the network load.
  • the total number of streams in this type of IVOD system is M + N.
  • the second example of IVOD system is also characterised by a dual-streaming operation. Again, the content is broken up into K segments of regular length T, and the segments are labelled from 1 to K respectively. The segments are put onto the staggered streams in a pattern as shown in Figure 5.
  • for the Group I anti-latency data streams, there are J segments on each frame and the frames are repeated on each stream.
  • J is again chosen to equal 16 and T is 30 seconds.
  • this configuration is characterised in that one of the Group I data streams, Stream I1, contains only Segment 1 repeated in all time slots.
  • Streams I2 to I9 contain Segments 2 to 17.
  • Stream I1 may be viewed as a leading data stream containing the leading segment of the leading portion.
  • Streams I2 to I9 may be considered as a plurality of finishing data streams containing the rest of the leading portion in the number of J segments.
  • the Group I stream interval may be chosen to be any desired value, but is again preferably set to be T for the same reason as in Configuration 1.
  • Streams I2 to I9 repeat themselves after JT (i.e. 8 minutes in this example).
  • although the leading data stream shown in Figure 5 contains only one leading segment, it should be understood that the leading data stream may contain more than one leading segment, for example, Segments 1-4.
  • the above conditions of the Group I anti-latency data streams of this Configuration 2 may then be viewed as T being four times as long, while this change may not affect the Group II interactive data streams. In such cases, the user may suffer from a larger start-up latency.
  • M may be substantially reduced and could be M = J/2 + 1 for the smooth merging of the data streams.
  • the arrangement and the set-up of the streams may be the same as in the previous example, and the same settings and variations are also applicable here.
  • the DDVR at the user end will immediately tap onto Stream I1.
  • the start-up latency should be bounded to T as the leading segment is repeated every time period T.
  • the DDVR will also tap onto one of the Group I finishing data streams, I2 to I9 in this case.
  • say Stream Ii is chosen.
  • the DDVR may tap onto the leading data stream and one of the finishing data streams simultaneously if the DDVR is capable of doing so. In the latter case, both streams are processed by the DDVR and the segments are buffered according to the segment sequence number.
  • the DDVR will also tap onto one of the Group II streams (in this case Stream II2).
  • the time at which the DDVR taps onto the Group II streams is a matter of choice; at the latest, the DDVR should tap onto one of the Group II streams right before all data in the Group I streams has been received or played by the client.
  • after all data in the Group I data streams has been buffered and received, the DDVR then merges onto one of the Group II streams.
  • the merging technique is described in the DINA technology. After merging, the Group I stream (i.e. Stream Ii) may no longer be needed and the DDVR may rely only on the Group II stream for subsequent viewing to save bandwidth. Any allowable interactive request received at any time can be entertained as previously shown in the DINA technology.
  • the total number of streams in this IVOD system is (J/2 + 1) + N. As N is at least R/(JT), the optimal total number of data streams of the system is equal to (J/2 + 1) + R/(JT).
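  • Using the (J/2 + 1) + N count above (itself reconstructed from a garbled formula, so treat it as an assumption), Configuration 2 needs noticeably fewer streams than Configuration 1 for the same example figures:

```python
# Illustrative stream-count comparison for Configurations 1 and 2 (example figures).
J, N = 16, 15
config1_total = J + N              # assuming M = J Group I streams in Configuration 1
config2_total = (J // 2 + 1) + N   # one leading stream + J/2 finishing streams + N
print(config1_total, config2_total)  # 31 vs 24
```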
  • the third example of an IVOD system is also characterised by a dual-streaming operation with the segments arranged in a hierarchical periodic frame structure with a size based on the Fibonacci numbers. Again, the content is broken up into K segments of regular length T, and the segments are labelled from 1 to K respectively. The segments are put onto the staggered streams in a pattern as shown in Figure 6. There are also two groups of staggered streams.
  • the Group I data streams contain the data in the leading portion having J segments. Note that this J is slightly different from those used in Configurations 1 and 2.
  • the frame period of the m-th Group I stream is given by Fm, where Fm is the m-th Fibonacci number.
  • the first few Fibonacci numbers are shown in Table 2.
  • the Group I stream interval is again preferably set to be T as in Configurations 1 and 2.
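  • The benefit of the Fibonacci frame structure can be seen from the identity F1 + F2 + ... + Fm = F(m+2) - 1: the number of leading segments that M Group I streams can cover grows roughly exponentially with M, so only a handful of streams is needed even for a short anti-latency interval. The sketch below (assuming the conventional 1, 1, 2, 3, 5, ... sequence) only computes these frame sizes; the actual placement of segments onto the streams follows Figure 6 and is not reproduced here.

```python
# Fibonacci frame sizes for the Group I streams of Configuration 3 (sketch only).
def fibonacci(n):
    """Return the first n Fibonacci numbers 1, 1, 2, 3, 5, ..."""
    fibs = [1, 1]
    while len(fibs) < n:
        fibs.append(fibs[-1] + fibs[-2])
    return fibs[:n]

coverage = 0
for m, f in enumerate(fibonacci(10), start=1):
    coverage += f   # F1 + ... + Fm = F(m+2) - 1 leading segments covered
    print(f"stream {m}: frame period {f} segments, cumulative coverage {coverage}")
```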
  • the arrangement and the set-up of the streams are similar to the previous examples, but for the sake of illustration, the Group II streams start at Segment 81.
  • Segment 3 will either be buffered during the time when Segment 2 is being received, or Segment 3 will be available on Stream I2 immediately following Segment 2's completion. After both Segments 2 and 3 have been received, the DDVR will tap onto Streams I3 and I4, and the process continues as before. Both streams are processed by the DDVR and the extra segments are buffered according to the segment sequence number.
  • the DDVR is presumed to connect to the 1st and 2nd data streams for starting up the movie, such that the latency is bounded to T.
  • the user may choose to first tap onto the m-th and (m+1)-th data streams, wherein m is any number larger than 1.
  • the user can still view the content but may be suffering from larger latency. This may be preferred by some users who wish to skip the first few minutes of a movie, for example.
  • each of the data segments shown in Figure 6 may contain more than one of the K segments of the data to be transmitted.
  • each of the data blocks as shown in Figure 6 may in fact contain 5 data segments.
  • the above conditions of the Group I anti-latency data streams of this Configuration 3 may then be viewed as T being five times as long, while this change may not affect the Group II interactive data streams. In such cases, the user may suffer from a larger start-up latency.
  • m may not have to start from 1, provided that the users can accept a larger start-up latency and trimming of data.
  • the system administrator may remove the first four Group I data streams in Figure 6.
  • if the data being transmitted is software, this arrangement may not be allowed; otherwise the user may not be able to receive the complete software.
  • this may be acceptable, provided that the trimming of the video is accepted by the copyright owner.
  • the DDVR preferably begins to merge onto one of the Group II streams, at the very least to save bandwidth, once the number of segments buffered has exceeded the size of the Group II stream interval (in this case 80 segments are needed for an 8-minute Group II stream interval).
  • after merging, the Group I streams may no longer be needed and the DDVR may rely only on the Group II stream for subsequent viewing. Any allowable interactive request received at any time can be entertained as described in the DINA technology.
  • the number of Group I data streams required, M, is determined by the number of Group II data streams, which is in turn to be determined manually according to various system factors. With a given start-up latency T, the total number of streams required in this IVOD system can be found by looking up the necessary frame size from a table containing the relevant Fibonacci numbers. The minimal number of data streams is then the M Group I data streams plus the N individual Group II data streams.
  • M may be less than this value, but then the user may suffer from the phenomenon of "dropping frames".
  • M may be larger than this value but this may create unnecessary network load. This may be a matter of design choice that should be left to be determined by the system administrator.
  • the start-up latency T can be as low as 6 seconds (with an average of 3 sec), with a Group II stream interval of 8 minutes.
  • the total number of streams required for a 2-hour content can be as low as only 26.
  • FIG. 8 shows a possible optimal arrangement of the initial thirty segments or so in various streams based on the harmonic series approach.
  • the segments are labelled 1, 2, 3, ... etc...
  • the necessary and sufficient condition for guaranteeing the start-up latency to be bounded within one slot interval using only an optimal number of streams is that the placement of the segments should be done in such a way that Segment j (i.e. the j-th segment from the beginning of the leading portion) should be repeated in every j time slots or less, for all j from 1 to J.
  • Segment 1 should be repeated in every time slot in order that the start-up latency is bounded within one anti-latency interval T.
  • thus, there may be a whole stream taken up by Segment 1 alone.
  • Segment 2 should be repeated in every other time slot in order that the second segment is available immediately after the first segment has been received.
  • Segment 3 should be repeated in every three time slots and Segment j should be repeated in every j time slots.
  • Segment j may be repeated more frequently than required. That is, the j-th segment is repeated within an anti-latency time interval ≤ jT (a small checking sketch is given below). Note that the definition of the term "anti-latency time interval" in this Configuration 4 is different from that in Configurations 1 to 3.
  • the exact stream where the segments are placed does not matter as we are assuming that all streams are being received and processed by the DDVR.
  • the segments are buffered by the DDVR and rearranged into a suitable order.
  • the unfilled slots in Figure 9 can contain any data or even be left unfilled.
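  • The placement condition stated above (Segment j must recur within every j consecutive time slots) can be checked mechanically. The sketch below verifies a small hand-made placement for J = 4 that needs only three streams instead of four; the layout is an illustrative assumption and is not the placement of Figure 8.

```python
# Check the Configuration 4 condition: segment j must appear in every window of
# j consecutive time slots, so the j-th segment is never more than jT away.
def satisfies_condition(streams, J):
    period = len(streams[0])
    # Unroll two periods so that windows wrapping around the cycle are covered.
    slots = [set(col) for col in zip(*[s * 2 for s in streams])]
    for j in range(1, J + 1):
        for start in range(period):
            if j not in set().union(*slots[start:start + j]):
                return False
    return True

# Hypothetical 3-stream placement for a leading portion of J = 4 segments.
streams = [
    [1, 1, 1, 1],   # Segment 1 in every slot
    [2, 3, 2, 3],   # Segments 2 and 3 alternate (each recurs within <= 2 slots)
    [4, 4, 4, 4],   # Segment 4 appears at least once every 4 slots
]
print(satisfies_condition(streams, J=4))   # True
```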
  • J can be set to any desired number larger than K/N for the sake of engineering convenience, where J equals the number of data segments in the leading portion.
  • each of the data segments shown in Figure 8 may contain more than one of the K segments of the data to be transmitted.
  • for example, each of the data blocks as shown in Figure 8 may in fact contain 10 data segments.
  • the above conditions of the Group I anti-latency data streams of this Configuration 4 may then be viewed as T being ten times as long, while this change may not affect the Group II interactive data streams. In such cases, the user may suffer from a larger start-up latency.
  • j may not have to start from 1 but from any number larger than 1, provided that the users can accept a larger start-up latency.
  • the system administrator may remove the first three Group I data streams in Figure 8.
  • if the data being transmitted is software, this arrangement may not be allowed; otherwise the user may not be able to receive the complete software.
  • this may be acceptable, provided that the trimming of the video is accepted by the copyright owner.
  • j may start from any number larger than 1, for example, 5.
  • the streams are again divided into two groups, Groups I and II.
  • the segment arrangements of the Group I streams have been shown in Figure 8.
  • the segment arrangements of the Group II streams are the same as those shown in any one of Figures 4 to 6.
  • a suitable Group II stream will also be tapped into and processed. This allows a smooth merging of the Group I streams (where the initial m segments are placed) into a single Group II stream.
  • the tapping onto the Group II stream may wait until all data in the leading portion contained in the Group I streams is received by the client DDVR.
  • all the Group I streams may no longer be needed, and only a single Group II stream is needed for continuous viewing by the user.
  • the user could initiate any of the allowable interactive requests, including pause and resume, rewind, and slow motion playback.
  • this multi-streaming arrangement may be used to replace the Fibonacci stream sequences (Group I streams) in Configuration 4 to further reduce the number of streams required.
  • the condition is that the DDVR should have enough buffer and processing power to buffer and process the received data.
  • Table 3 in the upcoming section lists some results for the various configurations.
  • Configurations 3 and 4 demonstrate an IVOD system with a very short start-up latency in comparison with Configurations 1 and 2 using a comparable number of streams. But Configuration 1 or 2 also has an advantage over Configuration 3 or 4: they allow coarse jumping from stream to stream during the first stream interval while Configurations 3 and 4 do not. In real life, the first few minutes of a content source usually contain a lot of headers and information that many users may want to skip by jumping. Therefore, it is desirable to provide at least a limited jump capability for the users.
  • this IVOD system contains three groups of staggered streams, namely Groups I(1), I(2) and II.
  • the Group I(1) data streams have a total number of A data streams responsible for distributing data having C segments.
  • the Group I(2) data streams have a total number of B data streams responsible for distributing data having D segments, with each of the B data streams being staggered by a coarse-jump interval. There are E data segments in the coarse-jump interval.
  • Group I(1) contains the first 7 Fibonacci streams as shown in Configuration 3.
  • Group I(2) contains the 8 Group I streams as shown in Configuration 1, running from Segment 11 to 90, with a staggered stream interval of 10 segments.
  • Group I(2) can contain data segments running from 1 to 90, although it may seem to be redundant.
  • the frame period of the Group I(2) streams is 80 segments or 8 minutes, and this is the coarse-jump frame period allowing the user to perform a coarse-jump interactive function while the DDVR is connected to the Group I data streams.
  • the Group II streams of Configuration 5 are identical to the Group II streams of the other configurations. In this particular example, each of the Group II streams starts from Segment 1 and goes all the way to the end of the entire content. The arrangement of the streams and segments is shown in Figure 10.
  • the user can start at any time with a start-up latency of one segment (6 seconds in this example). Furthermore, users can coarse jump at any time within the start-up period, the time when the DDVR connects to the Group I streams.
  • the start-up period is preferably defined to be the time within the first Group II stream interval (that is, from the 0-minute point to the 9-minute point) as in previous configurations. Each coarse jump is 1 minute apart from each other, which is determined by the coarse-jump frame period. Thus, the users can skip the headers using this arrangement.
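  • The one-minute spacing of the coarse jumps follows directly from the Group I(2) stagger of 10 segments at 6 seconds per segment, as a quick check shows:

```python
# Coarse-jump arithmetic for the Configuration 5 example given in the text.
segment_sec = 6          # one segment lasts 6 seconds (start-up latency of one segment)
stagger_segments = 10    # Group I(2) streams are staggered by 10 segments
frame_segments = 80      # Group I(2) frame period: 80 segments

print(stagger_segments * segment_sec, "seconds between coarse-jump points")   # 60
print(frame_segments * segment_sec / 60, "minutes per coarse-jump frame")     # 8.0
```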
  • the total number of streams needed for holding a two-hour content in the particular example shown in Figure 10 is 30.
  • the number of Group I(1) data streams required, i.e. A, may be determined as in Configuration 3, or as in Configuration 4 if the arrangement of Configuration 4 is used. As in Configuration 4, C, the total number of data segments to be transmitted in Group I(1), preferably equals E. The same considerations on the number of data streams required as in Configurations 3 and 4 may also be applicable to Group I(1).
  • the anti-latency data streams as described above may preferably be generated continuously, such that these streams are present in the system continuously, or at least during prime time (say, 6-11pm), for users to tap into.
  • prime time: say, 6-11pm
  • some further bandwidth may be saved if the anti- latency data streams are generated upon request of the users.
  • This alternative approach may be beneficial to Configurations 1, 2, and 3. These are shown in Figures 14, 15, and 16. In these figures, the data segments in grey represent those data segments or data streams that are "turned-on" upon requests from the users.
  • each of the Group I anti-latency data streams is still staggered by an anti-latency stream interval T.
  • not all of the Group I anti-latency data streams may be present or "turned on" at all times.
  • the anti- latency data streams can be terminated to further minimize the bandwidth usage.
  • as one of the basic requirements in Configuration 4 is that the user should be able to connect to all of the Group I data streams, this "on-demand" approach does not appear to be applicable to Configuration 4.
  • although this alternative approach may seem to save some additional bandwidth in comparison with the original Configurations, it may be less preferred for several reasons. First, it may increase the workload and processing requirement on the server side, and the complexity of programming and implementation. Second, it may lead to overload of the resulting system if care is not taken at the design stage in allocating the required bandwidth. Third, this alternative approach will in fact become the original Configurations when the number of requests from users is large.
  • the head portion of each data segment may contain duplicated data appearing in the tail portion of the immediately preceding segment.
  • the amount of data to be carried in the duplicated portion may be T' (normalized with respect to the data rate of the stream), where T' is the delay that may be incurred during the changeover of the streams.
  • T' may be on the order of 10-20 milliseconds.
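  • The size of the duplicated head portion is simply the changeover delay T' multiplied by the stream data rate; for the figures quoted above it amounts to only a few kilobytes per segment (the 1 Mb/s rate is an assumed example):

```python
# Size of the duplicated (head) portion needed to mask a stream changeover delay T'.
rate_bps = 1_000_000                 # example stream rate: 1 Mb/s (assumed figure)
for t_prime_ms in (10, 20):
    overlap_bits = rate_bps * t_prime_ms / 1000
    print(f"T' = {t_prime_ms} ms -> {overlap_bits / 8 / 1024:.1f} KiB duplicated per segment")
```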
  • the server needs to generate the appropriate multi-streams in the patterns that have been illustrated in any one of Configurations 1 to 5, or such patterns as may be designed.
  • the distribution network should have sufficient capacity to carry all the required streams to the end user DDVR.
  • the end user DDVR should have sufficient bandwidth, buffer and processing capability to handle the multi-streams.
  • the DDVR should also have sufficient storage to buffer at least one Group II stream interval of data from the multi-streams.
  • the receiver DDVR may have a processor for raising a request for the content, and a connector for connecting to the Group I and II data streams.
  • for Configurations 1 and 2, it may be necessary for the DDVR to include a buffer for buffering the received Group I data streams. For Configurations 3 and 4, the DDVR should include a buffer for buffering the data received from the Group I data streams. The processor will then also be responsible for processing the data to put it in the proper order.
  • the receiving device, the receiver, at the user end may not need to have any hard disk storage.
  • the only memory or buffer needed at the STB, the client/receiver, may be the RAM (random-access memory) to buffer one stream interval equivalent of data. Assuming a stream interval of 8 minutes, this requires roughly 60 MB of RAM for a 1 Mb/s MPEG-4 stream.
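  • The 60 MB figure follows from buffering one 8-minute stream interval of a 1 Mb/s stream; the same arithmetic scales to other intervals and bit rates:

```python
# RAM needed at the STB to buffer one stream interval of data.
def buffer_megabytes(interval_min, rate_mbps):
    bits = interval_min * 60 * rate_mbps * 1_000_000
    return bits / 8 / 1_000_000

print(buffer_megabytes(8, 1.0))   # 60.0 MB for 8 minutes of a 1 Mb/s MPEG-4 stream
```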
  • This technique can be contrasted with many VOD techniques that require large hard disk storage (sometimes as large as 60 GB) at the STB. Therefore, this IVOD system also appears to the users like a diskless DVR.
  • the system provider may choose to provide additional storage to the users in the form of a hard disk or other non-volatile medium, or use such other equipment as may be necessary to buffer and receive the data.
  • the DDVR may be configured such that it plays the received data at a slower rate than the transmission rate of the data.
  • the transmission rate may be
  • the DDVR may be required to have a larger buffer size to accommodate the received but not yet played data.
  • the DDVR may be configured to contain or pre-fetch at least a portion of the data in the Group I data streams, i.e. the leading portion of the data to be transmitted, for a certain period of time in its local buffer.
  • such data may be termed "pre-fetched data".
  • the pre-fetched data may contain all of the data contained in the Group I data streams provided that the DDVR has adequate buffer size.
  • the content of the data to be transmitted may be refreshed every day for video data, or more than once per day. In this particular example, it may be necessary for the pre-fetched data to be refreshed every day.
  • the refresh time may be set at any desired value that may range from one day to even one year.
  • This process may be initiated by the anti-latency signal generator, the interactive signal generator, or by the client itself by a routine call procedure. In doing so, the latency time and the total number of data streams required in the network may be further reduced. This may be particularly important for VOD systems transmitting a large number of sets of data.
  • Vertex 1 may be realised as current VOD systems with all the data being sent and then stored in the STB, whether the client raises a request for the data or not. In such a case, the STB should have a relatively large buffer size. This may increase the manufacturing costs of the STB.
  • Vertex 2 may represent the systems as described in Configurations 1-5. Under such a configuration, the requirement on the STB may be minimal while the system may be more demanding on the bandwidth.
  • Vertex 3 may represent a hybrid system of Vertexes 1 and 2.
  • the IVOD systems of this invention may find immediate applications in existing cable TV, terrestrial broadcasting, and satellite broadcasting systems. With very little modification on the existing infrastructure, the non-interactive broadcasting, or NVOD systems may be converted into an IVOD system. Both analogue and digital transmission systems can take advantage of the multi-streaming concept. However, the discussions below will only describe system configurations for digital transmission systems.
  • the RF transmission bands are usually divided into 6 MHz (NTSC) or 8 MHz (PAL) channels.
  • NTSC: 6 MHz
  • PAL: 8 MHz
  • Figure 11 shows a typical system configuration for this multi-streaming system. It is very similar to an existing broadcasting system. Only the transmission unit at the head end, which may be called an anti-latency device, and the reception unit at the user end, the client/receiver, may need to be modified.
  • digital signals such as QAM are transmitted.
  • the set top box should be RF-tuned to the particular RF channel of interest.
  • the cable modem would filter out the 30 - 40 Mb/s digital streams and decode two streams at a time (for Fibonacci dual-streaming systems) or decode all the harmonic multi-streams (for harmonic multi-streaming systems).
  • Figure 12 shows the block diagram of the STB / cable modem.
  • the STB / cable modem is similar to other STB / cable modems except for its processing unit which can process at least 2 multi-streams simultaneously rather than a single stream.
  • the decoded streams would be buffered in the STB and the content would be reconstructed according to the sequence numbers of the segments. With the hundreds of channels available in a typical broadcasting system, this translates to 200 hours or more of fully interactive programs available to a virtually unlimited number of users.
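  • A minimal sketch of the receiver-side reassembly described above: segments arriving from the (at most two) tapped streams are buffered by sequence number and released to the decoder in order. The class and method names are illustrative assumptions, not part of any particular STB or cable-modem API.

```python
# Minimal sketch of reassembly by segment sequence number at the STB (illustrative).
class SegmentReassembler:
    def __init__(self):
        self.buffer = {}        # sequence number -> segment payload
        self.next_to_play = 1   # next segment the decoder expects

    def on_segment(self, seq, payload):
        """Called for every segment received on any tapped stream."""
        if seq >= self.next_to_play and seq not in self.buffer:
            self.buffer[seq] = payload

    def pop_playable(self):
        """Return the segments that can be handed to the decoder, in order."""
        out = []
        while self.next_to_play in self.buffer:
            out.append(self.buffer.pop(self.next_to_play))
            self.next_to_play += 1
        return out

r = SegmentReassembler()
r.on_segment(2, "seg-2")    # arrives early from the Group II (interactive) stream
r.on_segment(1, "seg-1")    # arrives from a Group I (anti-latency) stream
print(r.pop_playable())     # ['seg-1', 'seg-2']
```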

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Information Transfer Between Computers (AREA)
  • Computer And Data Communications (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

This invention describes a new method and system for delivering data over a network to a large number of clients, which may be suitable for building large-scale Video-on-Demand (VOD) systems. In current VOD systems, the client may suffer from a long latency before starting to receive requested data that provides sufficient interactive functions, or the reverse, unless the network load is significantly increased. The method utilizes two groups of data streams, one responsible for minimizing latency while the other provides the required interactive functions. In the anti-latency data group, uniform, non-uniform or hierarchical staggered stream intervals may be used. The system of this invention may have a relatively small start-up latency while users may enjoy most of the interactive functions that are typical of video recorders. Furthermore, this invention may also be able to keep the number of data streams, or the bandwidth, required within reasonable bounds.

Description

System for Delivering Data Over a Network
Field of the Invention
This invention relates to methods and systems for delivering data over a network, particularly those for delivering a large amount of data with repetitive content to a large number of clients, like Video-on-Demand (VOD) systems.
Background of the Invention Current VOD systems face a number of challenges. One of them is how to provide the clients, which may number in the millions, with sufficient interactivity like fast-forward/backward and/or forward/backward-jump. At the same time, the provision of such functions should not impose a severe network load, as the network resources, namely the bandwidth, may be limited. Furthermore, every client generally prefers to have the movie he selects start as soon as possible.
The following sections describe some of the currently used VOD systems and their possible disadvantages:
1. Near-VOD (NVOD) with regular stream interval
An NVOD system consists of staggered multicast streams with a regular stream interval T (Figure 1). The streams are multiplexed onto the same or different physical media for distribution to the users via some multiplexing mechanisms (such as time-division multiplexing, frequency-division multiplexing, code-division multiplexing, wavelength-division multiplexing, etc.). The distribution mechanisms include point-to-point, point-to-multipoint and other methods. Each stream is divided into regular segments of interval T, and the segments are labelled 1, 2, 3, ..., N respectively. The content that is to be distributed to the users is carried on the N segments and the content is replicated on all these streams. The content is also repeated on each stream in time. By using such a staggered streaming arrangement with regular stream interval T, the users are guaranteed to receive the content at any time with a start-up latency less than T. However, there is no provision for user interactivity in such a system. If a user interrupts the content viewing, say by pausing the display, the user cannot resume viewing at the play point where the pause occurred and is forced to skip some content to keep up with the continuously playing multicast stream.
2. Quasi-VOD (QVOD) with irregular stream interval A QVOD system consists of staggered multicast streams with irregular stream intervals (Figure 2). The streams are multiplexed onto the same or different physical media for distribution to the users via some multiplexing mechanisms (such as time-division multiplexing, frequency-division multiplexing, code-division multiplexing, wavelength-division multiplexing, etc.). The distribution mechanisms include point-to-point, point-to-multipoint and other methods. Unlike the NVOD system where the streams constantly exist, the streams in a QVOD system are created on demand from the users' requests for the content. The users' requests within a certain time interval Ti are batched together and served together by Stream i. The stream intervals T1, T2, ..., Ti, ... are irregular. The streams (Stream 1 to Stream i, etc.) are all provided on demand and will be removed as soon as the content distribution has been completed. The streams are constantly created as users' requests come in. By using such a staggered streaming arrangement with irregular stream interval Ti, the particular group of users starting within interval Ti is guaranteed to receive the contents within Ti (the start-up latency). Again, there is no provision for user interactivity in such a system. If a user interrupts the content viewing, say by pausing the display, the user cannot resume viewing at the play point where the pause occurred and is forced to skip some content to keep up with the continuously playing multicast stream.
3. Distributed Interactive Network Architecture (DINA) The DINA system refers to the method and system described in the applicant's
PCT applications PCT/IB00/001857 & 001858. In the DINA system, interactive functions including fast-forward/backward, forward/backward-jump, slow motion, and so on can be provided by a plurality of multicast video data streams in conjunction with a plurality of distributed interactive servers. Although interactive functions may be provided to the client in such a DINA system, the network load may increase if the start-up time for each user's request is to be reduced. This is determined by the stream interval of the multicast data streams. Generally, the number of data streams, and therefore the network load, increases with the decrease of the stream interval. In the NVOD and QVOD systems, a user wanting to view the content will simply tap into one of the many staggered streams and view the content simultaneously with all others sharing the stream. While such schemes are simple and efficient, they suffer from two difficulties: a large start-up latency and user inflexibility.
For the first difficulty, a user may have to wait as long as one stream interval T before the request is served, and the waiting time may be as large as many minutes or even hours, depending on the stream interval. Although the stream interval can be made very small, say even down to a few seconds, this also means that the system has to provide a large number of streams for serving the same amount of content. The
number of streams required is simply R/T, where R is the length of the content and T is
the stream interval. Thus, small start-up latency may incur a much higher transmission bandwidth and cost. The DINA system may also face such a difficulty.
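For a concrete sense of this trade-off, the R/T relationship can be evaluated for a few typical stream intervals; the figures below are illustrative only.

```python
# Number of staggered NVOD streams needed for a single title: simply R / T.
R_min = 120                      # content length R: a two-hour movie
for T_sec in (30, 60, 300):      # stream interval T, in seconds
    print(f"T = {T_sec:>3} s -> {R_min * 60 // T_sec} streams")
```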
For the second difficulty, the users viewing a multicast stream cannot freely interrupt the stream because there are other viewers. Therefore, NVOD and QVOD systems cannot allow VCR-like interactivity such as pause, resume, rewind, slow motion, fast forward, and so on. These systems also hinder the deployment of new forms of interactive media. In recent years, one popular approach to offering some form of VCR-like interactivity over NVOD and QVOD systems is to add a storage unit to the set-top box (STB) so as to cache all the available content being broadcast. Such systems suffer from a higher system cost and operational problems like storage unit failure and management.
It can be realised that the prior art may fail to provide a solution to the existing problems in VOD systems. Specifically, current VOD systems may not be able to provide the clients/users with the desired interactive functions with a short start-up time, while at the same time minimising the network load. Therefore, it is an object of this invention to resolve at least some of the problems set forth in the prior art. As a minimum, it is an object of this invention to provide the public with a useful choice. Summary of the invention
Accordingly, this invention provides, in the broad sense, a method and the corresponding system for transmitting data over a network to at least one client having a latency time to initiate transmission of said data to the client. The method of this invention includes the steps of: generating at least one anti-latency data stream containing at least a leading portion of data for receipt by a client; and generating at least one interactive data stream containing at least a remaining portion of said data for the client to merge into after receiving at least a portion of an anti-latency data stream.
The anti-latency data streams and the interactive data streams may be generated by at least one anti-latency signal generator and at least one interactive signal generator, respectively.
It is another aspect of this invention to provide a method and the corresponding system for transmitting data over a network to at least one client including the step of fragmenting said data into K data segments each requiring a time T to transmit over the network, wherein each of the K data segments contains a head portion and a tail portion, and the head portion contains a portion of the data of the tail portion of the immediately preceding segment to facilitate merging of the K data segments when received by the client.
The K data segments may be generated by a signal generator.
It is yet another aspect of this invention to provide a method and the corresponding system for transmitting data over a network to at least one client having a latency time to initiate transmission of said data to the client, including the steps of: - generating at least one anti-latency data stream containing at least a leading portion of data for receipt by the client; pre- fetching the leading portion in the client as pre-fetched data; and generating at least one interactive data stream containing at least a remaining portion of said data for the client to merge into the leading portion.
This invention also provides a method and the corresponding system for transmitting data over a network to at least one client including the steps of generating a plurality of anti-latency data streams, in which the anti-latency data streams include: a leading data stream containing at least one leading segment of a leading portion of said data being repeated continuously within the leading data stream; and a plurality of finishing data streams, each of the finishing data streams:
• containing at least the rest of the leading portion of said data; and
• repeated continuously within said finishing data stream, and wherein each successive finishing data stream is staggered by an anti-latency time interval.
This invention further provides a method and the corresponding system for transmitting data over a network to at least one client. The method includes the steps of generating M anti-latency data streams numbered 1 to M, wherein the m-th anti-latency data stream has Fm segments, and Fm is the m-th Fibonacci number; and wherein said Fm segments are repeated continuously within the m-th anti-latency data stream.
It is yet another aspect of this invention to provide a method and the corresponding system for transmitting data over a network to at least one client, said data being fragmented into K segments each requiring a time T to transmit over the network. The method includes the steps of generating M anti-latency data streams containing 1 to K anti-latency data segments, wherein the anti-latency data segments are distributed in the M anti-latency data streams such that the k-th leading segment is repeated within an anti-latency time interval ≤ kT within the anti-latency data streams.
This invention further provides a method for receiving data being transmitted over a network to at least one client. The data to be transmitted is fragmented into K segments each requiring a time T to transmit over the network. The data is divided into two batches of data streams, the anti-latency data streams include M anti-latency data streams, and the interactive data streams include N interactive data streams. The method for receiving the data includes the steps of: raising a request for said data (the request may be raised by a processor of the client); and connecting the client to the M anti-latency data streams and receiving data in the M anti-latency data streams. The client or the receiver may connect to the anti-latency data streams by a connector.
This invention also provides a method and a corresponding system for receiving data being transmitted over a network to at least one client, wherein said data includes a leading portion and a remaining portion, and the remaining portion is transmitted by at least one interactive data stream, including the steps of: - pre-fetching the leading portion in the client as pre-fetched data, which is contained in the buffer of the client; and merging the pre-fetched data to the remaining portion by a processor.
Alternatively, instead of being generated continuously, the anti-latency data streams can be generated upon request from the client.
Further embodiments and options of the above methods and systems will be described in the following sections, and may then be apparent to one skilled in the art after reading the description.
Brief description of the drawings
Preferred embodiments of the present invention will now be explained by way of example and with reference to the accompanying drawings in which:
Figure 1 shows the data stream structure of a NVOD system. Figure 2 shows the data stream structure of a QVOD system.
Figure 3 shows the overall system architecture of the data transmission system of this invention. Figure 4 shows the data streams arrangement of Configuration 1 of the data transmission system of this invention.
Figure 5 shows the data streams arrangement of Configuration 2 of the data transmission system of this invention. Figure 6 shows the data streams arrangement of Configuration 3 of the data transmission system of this invention. Note the difference in the arrangement of the Group II data streams compared with Figures 4 & 5.
Figure 7 shows yet another Group I data streams arrangement of Configuration 3. Figure 8 shows the data streams arrangement of Group I data streams of Configuration 4 of the data transmission system of this invention.
Figure 9 shows yet another arrangement of Group I data streams of Configuration 4 of the data transmission system of this invention.
Figure 10 shows one of the data streams arrangements of Configuration 5 of the data transmission system of this invention. The particular arrangement of Group I data streams shown in this figure combines Configurations 1 & 3.
Figure 11 shows the system configuration of a multicast data streams generator of the data transmission system of this invention.
Figure 12 shows the system configuration of receiver of the data transmission system of this invention.
Figure 13 shows the local storage versus transmission bandwidth trade-off relationship.
Figure 14 shows an alternative "on-demand" approach of Configuration 1.
Figure 15 shows an alternative "on-demand" approach of Configuration 2. Figure 16 shows an alternative "on-demand" approach of Configuration 3.
Detailed Description of Preferred Embodiments
This invention is now described by way of example with reference to the figures in the following sections. Even though some of them may be readily understandable to one skilled in the art, the following Table 1 shows the abbreviations or symbols used throughout the specification together with their meanings so that the abbreviations or symbols may be easily referred to.
Although the following description refers to the data to be delivered as being video, it is expressly understood that data in other forms may also be delivered in the system of this invention, for example audio or software programs, or their combination. For instance, this invention may be used for deploying an operating system software to a large number of clients through a network upon request. Further, this invention may be utilised in data transmission systems handling a large amount of data with repetitive content, for instance in a video system bus of a computer handling many complicated but replicated 3D objects. Moreover, this invention may not be limited to the transmission of digital data only.
In this invention, a multi-stream multicasting technique is used to overcome the existing problems in VOD systems as described in the Background section. By using this technique, the users are allowed VCR-like interactivity without the need to add a storage unit at the STB to cache all the content that may be viewed by the user on a daily basis.
Figure 3 shows the system configuration. The multicast streams are generated from a multicast server unit. The streams are multiplexed onto the physical media and distributed to the end users through a distribution network. At each user end, there is a set top box (STB), such as a DDVR, that selects a multitude of streams for processing. By arranging the content to be carried on the streams in a desired manner (as shown later in Figures 4 - 10), the start-up latency may be minimized while the users are provided with interactive functions. The DDVR should have sufficient bandwidth, buffer and processing capability to handle the multi-streams.
The data transmission system of this invention, which may be called an IVOD system, may look similar to the NVOD system. However, the IVOD and NVOD systems are differentiated by the following points:
1. how the content is put on the staggered streams,
2. how the staggered streams are generated,
3. how the DDVR selects and processes the multitude of staggered streams to restore the content.
The word "staggered" used above and throughout the specification in describing the data streams refers to the situation that each ofthe data streams begins transmission at different times. Therefore, two "frames" of two adjacent data streams, in which the term "frame" represents the repeating unit of each data stream, are separated by a time interval.
In the broad sense, the data transmission method and system may be described as providing two groups of data streams, Group I and II. Group I data streams, which may be termed anti-latency data streams, may serve to reduce latency for starting up the transmission of the required data. Group I data streams may be generated by at least one anti-latency signal generator. Group II data streams, which may be termed interactive data streams, may serve to provide the desired interactive functions to the users. Group II data streams may be generated by at least one interactive signal generator. For the interactive functions provided by Group II data streams, reference may be made to the applicant's PCT applications nos. PCT/TB00/001857 & 1858, the contents of which are incorporated herein by reference. The operation of the interactive functions is not considered to be part of the invention in this application and the details will not be further described here.
The operation of the IVOD system can best be illustrated by the following examples. Each of these examples is a valid IVOD system but they all differ in details with various tradeoffs. These examples are only intended to show the working principles of IVOD systems and are not meant to describe the only possible ways of IVOD operation.
In the following examples, the content to be transmitted, having a total amount of data Q, requires a total time R to be transmitted over the network. The content, for example, may be a movie. The Q data is broken up into K segments each having an amount of data S. Each data segment requires a time T to be transmitted over the network. Q and S may be in units of megabytes, while R and T are units of time. For the sake of convenience, the data segments of the Q data are labelled from 1 to K respectively. Therefore, K = Q/S. The Q data may be divided into a leading portion
and a remaining portion. In most cases, the Group I anti-latency data streams may contain the leading portion only. The Group II interactive data streams may contain the remaining portion or the whole set of the Q data, and this may be a matter of design choice to be determined by the system manager.
It should be noted that the system may still work if the individual data segments contain different amounts of data from each other, provided that they all require a time T for transmission. This may be achievable by controlling the transmission rate of the individual data segments. However, individual data segments may be preferred to have the same amount of data S for the sake of engineering convenience. On the other hand, it may be relatively more difficult to implement the system for each of the data segments to have the same amount of data S but with different transmission times. Although the following description refers to the transmission of one set of data, for instance, a movie, it should be apparent to one skilled in the art that the method and system may also be adapted to transmit a certain number of sets of data depending on, for example, the bandwidth available.
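By way of illustration only, the fragmentation relationships above may be sketched as follows, assuming a 2-hour content delivered at 1 Mb/s and a segment time T of 30 seconds; the figures and variable names are merely illustrative assumptions and form no part of the defined system:

    # Illustrative sketch of the fragmentation relationships (assumed example figures).
    R = 120 * 60            # total transmission time of the content, in seconds
    rate = 1_000_000        # assumed stream bit rate, 1 Mb/s
    Q = R * rate            # total amount of data, in bits
    T = 30                  # transmission time of one data segment, in seconds
    K = R // T              # number of segments; equivalently K = Q/S
    S = Q / K               # amount of data per segment, so that K = Q/S holds
    print(K, S / 8e6)       # 240 segments of 3.75 megabytes each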
A. Dual Streaming IVOD System (Configuration 1)
The simplest IVOD system is characterized by a dual-streaming operation. Dual streaming means that each user will tap into at most two of the multicast data streams at any time. Most of the time, the user may only be tapping into one data stream.
The segments are put onto the staggered streams as shown in Figure 4. There are two groups of staggered streams. For Group I anti-latency data streams, there are J segments on each frame. T is the anti-latency time interval and may also be the upper bound for the start-up latency of the IVOD system. Each anti-latency data stream is preferably staggered by the anti-latency time interval T, although the anti-latency time interval may be set at any desired value other than T.
In this particular example, J is equal to 16 and T is 30 seconds. So the frames in each of the Group I data streams repeat themselves after a time of JT, being 8 minutes. There are a total of M streams in Group I.
For Group II interactive data streams, there are N interactive data streams, with each of them being staggered by an interactive time interval. Although the interactive time interval may again be set at any desired value, the interactive time interval is preferably JT (i.e. 8 minutes in this example) for the sake of engineering convenience. Assuming the length of the content is R (say R equals 120 minutes), then there should be at least a total of R/(JT) = 15 streams in Group II. N may be larger than this value but this may create unnecessary network load.
When a user starts to view the content at time t1, the DDVR at the user end will select one stream from Group I (Stream Ii) and one stream from Group II (Stream IIj) to tap into. Once the client connects to Streams Ii and/or IIj, the data streams are processed by the DDVR, i.e. the client, and the segments are buffered according to the segment sequence number. The availability of the Group I staggered streams with stream interval T bounds the start-up latency to at most T.
Alternatively, the user or the client may tap into Stream Ii only and await all of the data in the leading portion to be received by the client before tapping into Stream IIj. After the DDVR has latched onto a Group I stream, the DDVR will immediately look for a suitable Group II stream for merging. In this particular case, each Group II data stream may preferably contain only the remaining portion of the Q data.
The method for merging data streams can be found in the DMA technology. After merging, the Group I stream may no longer be needed and the DDVR may then rely solely on Stream IIj for subsequent viewing. This may be the optimised alternative to minimise network load.
It should be noted that once the system has started, the user could initiate the following interactive requests, including pause and resume, rewind, and slow motion playback. However, forward and backward jumps may be restricted to jump to any one of the Group I or Group II streams (at any particular time). This problem may be resolved by fine-tuning the parameters of the system. For instance, Group I data streams may be designed to contain content that relatively few people wish to look at, like copyright notices.
The total number of streams in this type of IVOD system is M + R/(JT). The optimal system configuration is calculated to be M = N = J = √(R/T), and the optimal total number of streams is given by 2√(R/T).
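A short numerical sketch of the stream counts for this Configuration 1, using the example figures above (J = 16, T = 30 seconds, R = 120 minutes); the optimal values are included for comparison, and the variable names are illustrative only:

    import math

    T = 30.0                       # anti-latency stream interval, seconds
    J = 16                         # segments per Group I frame
    R = 120 * 60.0                 # content length, seconds

    M = J                          # Group I anti-latency streams, staggered by T
    N = math.ceil(R / (J * T))     # Group II interactive streams, staggered by JT
    total = M + N                  # 16 + 15 = 31 streams in this example

    J_opt = math.sqrt(R / T)       # optimal J = sqrt(R/T), about 15.5 here
    total_opt = 2 * math.sqrt(R / T)   # optimal total, about 31 streams
    print(M, N, total, round(J_opt, 1), round(total_opt, 1))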
B. Dual Streaming IVOD System (Configuration 2)
The second example of IVOD system is also characterised by a dual-streaming operation. Again, the content is broken up into K segments of regular length T, and the segments are labelled from 1 to K respectively. The segments are put onto the staggered streams in a pattern as shown in Figure 5.
In this configuration, there are also two groups of staggered streams. For Group I anti-latency data streams, there are J segments on each frame and the frames are repeated on each stream. In this example, J is again chosen to equal 16 and T is 30 seconds. This configuration is characterised in that one of the Group I data streams, Stream I1, contains only Segment 1 repeated in all time slots. Streams I2 to I9 contain Segments 2 to 17. In other words, Stream I1 may be viewed as a leading data stream containing the leading segment of the leading portion, while Streams I2 to I9 may be considered as a plurality of finishing data streams containing the rest of the leading portion in the number of J segments. The Group I stream interval may be chosen to be any desired value, but is again preferably set to be T for the same reason as in Configuration 1. Streams I2 to I9 repeat themselves after JT (i.e. 8 minutes in this example).
In this particular example, there should be at least a total of M = J/2 + 1 streams in Group I for the smooth merging of the leading data stream and the finishing data stream. M may be less than this value but then the user may suffer from the phenomenon of "dropping frames". M may be larger than this value but this may create unnecessary network load. This may be a matter of design choice that should be left to be determined by the system administrator.

Although the leading data stream shown in Figure 5 contains only one leading segment, it should be understood that the leading data stream may contain more than one leading segment, for example, Segments 1-4. The above conditions of the Group I anti-latency data streams of this Configuration 2 may then be viewed as T being four times as long, while this change may not affect the Group II interactive data streams. In such cases, the user may suffer from a larger start-up latency. On the other hand, M may be substantially reduced and could be M = J/8 + 1 for the smooth merging of the leading data stream and the finishing data stream. Although this may be less desirable, this may again be a matter of design choice that should be determined by the system administrator.
For Group II streams, the arrangement and the set up of the streams may be the same as in the previous example, and the same settings and variations are also applicable to this configuration.
When a user starts to view the content at time t1, the DDVR at the user end will immediately tap onto Stream I1. The start-up latency should be bounded to T as the leading segment is repeated every time period T. After all data in the leading segment is received, the DDVR will also tap onto one of the Group I finishing data streams, I2 to I9 in this case. For ease of illustration, Stream I2 is chosen. As an alternative, the DDVR may tap onto the leading data stream and one of the finishing data streams simultaneously if the DDVR is capable of doing so. In the latter case, both streams are processed by the DDVR and the segments are buffered according to the segment sequence number.
The DDVR will also tap onto one of the Group II streams (in this case Stream II2). The time at which the DDVR taps onto the Group II streams is a matter of choice - it may do so:
1. immediately after tapping onto the leading data stream, Stream I1
2. immediately after tapping onto one of the finishing data streams
3. after all data in the leading portion contained in Group I data streams is received by the DDVR
Generally, the DDVR should tap onto one of the Group II streams at least right before all data in the Group I streams is received or played by the client.
After all data in the Group I data streams has been buffered and received, the DDVR then merges onto one of the Group II streams. The merging technique is described in the DMA technology. After merging, the Group I stream (i.e. Stream I2) may no longer be needed and the DDVR may rely only on the Group II stream for subsequent viewing to save bandwidth. Any allowable interactive request received at any time can be entertained as previously shown in the DMA technology. The total number of streams in this IVOD system is (J/2 + 1) + N. As N preferably equals R/(JT), the optimal configuration is given by J = 2N = √(2R/T), and the optimal total number of data streams of the system is equal to √(2R/T) + 1.
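The corresponding counts for Configuration 2 may be sketched in the same manner (again taking J = 16, T = 30 seconds and R = 120 minutes; the names and figures are illustrative assumptions only):

    import math

    T = 30.0                       # anti-latency stream interval, seconds
    J = 16                         # segments per finishing-stream frame
    R = 120 * 60.0                 # content length, seconds

    M = J // 2 + 1                 # one leading stream plus J/2 finishing streams = 9
    N = math.ceil(R / (J * T))     # 15 Group II interactive streams
    total = M + N                  # 24 streams in this example

    J_opt = math.sqrt(2 * R / T)   # optimal J = 2N = sqrt(2R/T), about 21.9 here
    total_opt = J_opt + 1          # optimal total, about 23 streams
    print(M, N, total, round(J_opt, 1), round(total_opt, 1))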
C. Dual Streaming IVOD System (Configuration 3)
The third example of IVOD system is also characterised by a dual-streaming operation with the segments arranged in a hierarchical periodic frame structure with a size based on the Fibonacci numbers. Again, the content is broken up into K segments of regular length T, and the segments are labelled from 1 to K respectively. The segments are put onto the staggered streams in a pattern as shown in Figure 6. There are also two groups of staggered streams.
In this configuration, Group I data streams contain the data in the leading portion having J segments. Note that this J is slightly different from those used in Configurations 1 and 2. There are M Group I data streams labelled from 1 to M. For each of the Group I streams Im, where m is an integer representing the stream number, the frame period is given by Fm, where Fm is the mth Fibonacci number. The first few Fibonacci numbers are shown in Table 2. The Fibonacci numbers have the property that Fy = F(y-1) + F(y-2), where y is an integer starting from 3. The Group I stream interval is again preferably set to be T as in Configurations 1 and 2. There are 12 Group I streams in this example. For Group II streams, the arrangement and the set up of the streams are similar to the previous examples, but for the sake of illustration, the Group II streams start at Segment 81.
Table 2. Fibonacci numbers.
The principle of operation can best be explained by the following even though many different variations are possible. When a user starts to view the content at time t, the DDVR at the user end will immediately tap onto two Group I data streams, Streams I1 and I2. Both Segment 1 from Stream I1 and Segment 2 or 3 from Stream I2 will be buffered. As there are then two segments in the buffer, and Stream I2 has a frame size of 2, Stream I2 can be smoothly merged into using the methodology as described in the DMA technology. Thus, the start-up latency should be bounded to T. After Segment 1 has been received, the DDVR will tap onto Streams I2 and I3. Since there are only two segments in Stream I2, Segment 3 will either be buffered during the time when Segment 2 is being received, or Segment 3 will be available on Stream I2 immediately following Segment 2's completion. After both Segments 2 and 3 have been received, the DDVR will tap onto Streams I3 and I4, and the process continues as before. Both streams are processed by the DDVR and the extra segments are buffered according to the segment sequence number.
In the above discussion, the DDVR is presumed to connect to the 1st and 2nd data streams for starting up the movie such that the latency is bounded to be T. However, if the user wishes, he may choose to first tap onto the mth and (m+1)th data streams, wherein m is any number larger than 1. The user can still view the content but may suffer from a larger latency. This may be preferred by some users who wish to skip the first few minutes of a movie, for example.
Further, as in Configuration 2, each of the data segments shown in Figure 6 may contain more than one of the K segments of the data to be transmitted. For example, each of the data blocks as shown in Figure 6 may in fact contain 5 data segments. The above conditions of the Group I anti-latency data streams of this Configuration 3 may then be viewed as T being five times as long, while this change may not affect the Group II interactive data streams. In such cases, the user may suffer from a larger start-up latency.
As an alternative, m may not have to start from 1, provided that the users can accept a larger start-up latency and trimming of data. For instance, the system administrator may remove the first four Group I data streams in Figure 6. In the case of software transmission, this arrangement may not be allowed, otherwise the user may not be able to receive the complete software. However, in the case of video transmission, this may be acceptable, provided that the trimming of the video is accepted by the copyright owner.
By constructing the frame period of the streams according to the Fibonacci numbers Fm, after Stream I(m-1) has been received, the DDVR would have buffered at least Fm = F(m-1) + F(m-2) time slots. Using the merging methodology as described in the DMA technology, Stream I(m-1) can be smoothly merged into Stream Im, as the frame size of Stream Im is exactly Fm.
It is noted that after m segments are received, exactly m more segments would have been buffered because of the dual streaming arrangement. The DDVR preferably begins to merge onto one of the Group II streams, at the very least to save bandwidth, once the number of segments buffered has exceeded the size of the Group II stream interval (in this case 80 segments are needed for an 8-minute Group II stream interval). After merging, the Group I streams may no longer be needed and the DDVR may rely only on the Group II stream for subsequent viewing. Any allowable interactive request received at any time can be entertained as described in the DMA technology.
There is no optimal parameter for this Configuration. To save bandwidth, there should be no Group II data stream. However, users may only be able to enjoy limited interactivity depending on how much of the data is received and buffered in the DDVR. Specifically, the user may perform pause, resume, rewind, slow motion, and backward jump, but the user may not be able to perform fast forward and forward jump functions. The number of Group I data streams required, M, is determined by the number of Group II data streams, which is in turn to be determined manually according to various system factors. With a given start-up latency T, the total number of streams required in this IVOD system can be found by looking up the necessary frame size from a table containing the relevant Fibonacci numbers. The minimal number of data streams should be M such that FM ≥ 2K/N for the smooth merging between the individual Group I data streams. M may be less than this value but then the user may suffer from the phenomenon of "dropping frames". M may be larger than this value but this may create unnecessary network load. This may be a matter of design choice that should be left to be determined by the system administrator.
Using this technique, the start-up latency T can be as low as 6 seconds (with an average of 3 sec), with a Group II stream interval of 8 minutes. The total number of streams required for a 2-hour content can be as low as only 26.
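By way of illustration, the minimal number of Fibonacci-sized Group I streams for the figures quoted above (T = 6 seconds, a 2-hour content and an 8-minute Group II stream interval) may be sketched as follows; the frame sizes are indexed so that Stream I1 carries 1 segment and Stream I2 carries 2 segments, as in Figure 6, and the variable names are illustrative only:

    import math

    T = 6.0                        # segment time and start-up latency bound, seconds
    R = 120 * 60.0                 # 2-hour content
    K = int(R / T)                 # 1200 data segments
    interval = 8 * 60.0            # Group II stream interval, seconds
    N = math.ceil(R / interval)    # 15 interactive streams

    # Fibonacci frame sizes: 1, 2, 3, 5, 8, 13, ...
    frames = [1, 2]
    while frames[-1] < 2 * K / N:  # smooth-merging condition F_M >= 2K/N
        frames.append(frames[-1] + frames[-2])
    M = len(frames)                # 12 Group I streams for this example
    print(M, frames[-1], N, M + N)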
An alternative arrangement for the Group I streams is shown in Figure 7. Note that the frame structure of the streams only follows the Fibonacci sequence after Stream 4.
D. Multi-Streaming IVOD System (Configuration 4)
The previous three examples show several possible implementations of the IVOD systems with dual-streaming. In fact, there are many more possible implementations of the IVOD system, each depending on a different arrangement of the segments in different streams, and on the maximum number of streams that the end user DDVR must simultaneously tap into and process. The above three examples are relatively simple to understand and implement, but the number of streams used is not optimal because of the restriction that at most two streams are tapped into and processed at any given time. In the current configuration, a multi-streaming IVOD system with the optimal number of streams is demonstrated.
This configuration is realized with the assumption that all the streams that carry the content are tapped into and processed by the end user DDVR. Figure 8 shows a possible optimal arrangement of the initial thirty segments or so in various streams based on the harmonic series approach. The segments are labelled 1, 2, 3, ... etc. The necessary and sufficient condition for guaranteeing the start-up latency to be bounded within one slot interval using only an optimal number of streams is that the placement of the segments should be done in such a way that Segment j (i.e. the jth segment from the beginning of the leading portion) should be repeated in every j time slots or less, for all j from 1 to J. For example, Segment 1 should be repeated in every time slot in order that the start-up latency is bounded within one anti-latency interval T. Therefore, there may be a whole stream taken up by Segment 1 alone. Segment 2 should be repeated in every other time slot in order that the second segment is available immediately after the first segment has been received. Similarly, Segment 3 should be repeated in every three time slots and Segment j should be repeated in every j time slots. For j > 1, Segment j may be repeated more frequently than required. That is, the jth segment is repeated by an anti-latency time interval ≤ jT. Note that the definition of the term "anti-latency time interval" in this Configuration 4 is different from that in Configurations 1 to 3.
The exact stream where the segments are placed does not matter as we are assuming that all streams are being received and processed by the DDVR. The segments are buffered by the DDVR and rearranged into a suitable order. The unfilled slots in Figure 9 can contain any data or even be left unfilled.
As in Configuration 3, there is no optimal parameter for this Configuration. To save bandwidth, there should be no Group II data stream, in which case users may only be able to enjoy limited interactivity depending on how much of the data is received and buffered in the DDVR. This may not be desirable. The number of Group I data streams required, M, is determined by the number of Group II data streams, which is in turn to be determined manually according to various system factors. The total number M of streams required for carrying the J time slots can be found by summing the harmonic series from 1 to J, such that M = 1 + 1/2 + 1/3 + ... + 1/J. This is approximately equal to γ + ln(J), where γ is Euler's constant (≈ 0.5772...), when J is large. Even though J can be set to any desired number larger than K/N, for the sake of engineering convenience, it is preferred to have J = K/N, which equals the number of data segments in the interactive time interval. This is the optimal number of streams required to bind the start-up latency to within one slot interval.
Further, as in Configurations 2 and 3, each of the data segments shown in
Figure 8 may contain more than one of the K segments of the data to be transmitted. For example, each of the data blocks as shown in Figure 8 may in fact contain 10 data segments. The above conditions of the Group I anti-latency data streams of this Configuration 4 may then be viewed as T being ten times as long, while this change may not affect the Group II interactive data streams. In such cases, the user may suffer from a larger start-up latency.
Again, as an alternative, j may not have to start from 1 but any number larger than 1, provided that the users can accept a larger start-up latency. For instance, the system administrator may remove the first three Group I data streams in Figure 8. In the case of software transmission, this arrangement may not be allowed, otherwise the user may not be able to receive the complete software. However, in the case of video transmission, this may be acceptable, provided that the trimming of the video is accepted by the copyright owner.
Alternatively, j may start from any number larger than 1, for example, 5.
However, this merely means that the first data segment in Figure 8 is being repeated by an anti-latency time interval of 5T instead of T, with each subsequent jth data segment being repeated by an anti-latency interval of (5+j)T. This alteration should be obvious to one skilled in the art.
To create an IVOD system based on this optimal multi-streaming condition, the streams are again divided into two groups, Groups I and II. The segment arrangements of the Group I streams have been shown in Figure 8. The segment arrangements of the Group II streams are the same as those shown in any one of Figures 4 to 6. When a user initiates a viewing request, all of the Group I streams should be received and processed by the DDVR. In addition, a suitable Group II stream will also be tapped into and processed. This allows a smooth merging of the Group I streams (where the initial m segments are placed) into a single Group II stream. As an alternative, the tapping onto the Group II stream may be deferred until all data in the leading portion contained in the Group I streams is received by the client DDVR.
After one Group II stream interval (which is again set to be JT intentionally in this case), all the Group I streams may no longer be needed and only a single Group II stream is needed for the continuous viewing by the user. Like before, through the use of a plurality of Group II streams, once the system has started, the user could initiate any of the allowable interactive requests, including pause and resume, rewind, and slow motion playback.
As in Configuration 3, it is possible to create an IVOD system entirely based on the Group I streams as illustrated previously. By doing that, the number of streams can be reduced with minimised start-up latency. However, users of such systems may be restricted to limited interactivity, as discussed in Configuration 3. Furthermore, the buffer size at the DDVR must be as large as the entire content, and the processing capability of the DDVR is more demanding for the current configuration. The decision regarding which system to deploy should be left as an option to the service provider.
It should further be noted that this multi-streaming arrangement may be used to replace the Fibonacci stream sequences (Group I streams) in Configuration 3 to further reduce the number of streams required. The condition is that the DDVR should have enough buffer and processing power to buffer and process the received data. Table 3 in the upcoming section lists some results for the various configurations.
A non-optimal multi-streaming arrangement known as the logarithmic streaming is shown in Figure 9.
E. Mixed Dual-Dual/Multi-Dual Streaming IVOD System (Configuration 5)
Configurations 3 and 4 demonstrate an IVOD system with a very short start-up latency in comparison with Configurations 1 and 2 using a comparable number of streams. But Configuration 1 or 2 also has an advantage over Configuration 3 or 4 - it allows coarse jumping from stream to stream during the first stream interval while Configuration 3 or 4 does not. In real life, the first few minutes of a content source usually contain a lot of headers and information that many users may want to skip by jumping. Therefore, it is desirable to provide at least a limited jump capability for the users.
By combining Configuration 1 or 2 and 3 or 4, one may create an IVOD system with a limited jump capability even without the help of an external unicast stream. This IVOD system contains three groups of staggered streams, namely, Group I(1), Group I(2), and Group II. Group I(1) data streams have a total number of A data streams responsible for distributing data having C segments. Similarly, Group I(2) data streams have a total number of B data streams responsible for distributing data having D segments, with each of the B data streams being staggered by a coarse-jump interval. There are E data segments in the coarse-jump interval.
To give a more concrete example, let us assume a segment size T of 6 seconds.
Let Group I(1) contain the first 7 Fibonacci streams as shown in Configuration 3. Let Group I(2) contain the 8 Group I streams as shown in Configuration 1, running from Segment 11 to 90, with a staggered stream interval of 10 segments. Note that Group I(2) can contain data segments running from 1 to 90, although it may seem to be redundant. Accordingly, the frame period of Group I(2) streams is 80 segments or 8 minutes, and this is the coarse-jump frame period, allowing the user to perform a coarse-jump interactive function when the DDVR is connected to the Group I data streams. Group II streams of Configuration 5 are identical to the Group II streams of the other configurations. In this particular example, each of the Group II streams starts from Segment 1 and goes all the way to the end of the entire content. The arrangement of the streams and segments is shown in Figure 10.
With this hierarchical arrangement of streams and segments, it can be seen that the user can start at any time with a start-up latency of one segment (6 seconds in this example). Furthermore, users can coarse jump at any time within the start-up period, the time when the DDVR connects to the Group I streams. The start-up period is preferably defined to be the time within the first Group II stream interval (that is, from the 0-minute point to the 9-minute point) as in previous configurations. Each coarse jump is 1 minute apart from each other, which is determined by the coarse-jump frame period. Thus, the users can skip the headers using this arrangement. The total number of streams needed for holding a two-hour content in the particular example shown in Figure 10 is 30.
Although Figure 10 only shows the combination of Configurations 3 and 1 in
Group I data streams, it should be obvious to those skilled in the art that the following combinations are also possible: a. Configurations 4 and 1 b. Configurations 3 and 2 c. Configurations 4 and 2
The number of Group I(1) data streams required, i.e. A, may be determined by taking E in place of K/N in the formulas of Configurations 3 and 4. That is, if Configuration 3 is used in Group I(1), there should be A data streams in Group I(1) such that FA ≥ 2E. If Configuration 4 is used, then A is found by summing the harmonic series from 1 to E, which is approximately γ + ln(E). As in Configuration 4, C, the total number of data segments to be transmitted in Group I(1), preferably equals E. The same considerations on the number of data streams required as in Configurations 3 and 4 may also be applicable to Group I(1).
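For the concrete example given earlier (E = 10 segments of 6 seconds each, Group I(2) carrying Segments 11 to 90), the group sizes may be sketched as follows; the variable names are illustrative only:

    # Sizing sketch for the Configuration 5 example above (illustrative figures only).
    E = 10                         # data segments per coarse-jump stagger (1 minute at T = 6 s)
    D = 80                         # data segments carried by Group I(2), i.e. Segments 11 to 90

    # Group I(1) built as in Configuration 3: the smallest A with F_A >= 2E.
    fib = [1, 2]
    while fib[-1] < 2 * E:
        fib.append(fib[-1] + fib[-2])
    A = len(fib)                   # 7 Fibonacci streams (F_7 = 21 >= 20), as in the example

    B = D // E                     # 8 Group I(2) streams, staggered by E segments each
    print(A, B)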
The decision regarding which combination to deploy should again be left as an option to the service provider.
Alternative Arrangements of Configurations 1, 2, and 3
For a VOD system built to serve a large number of users, the anti-latency data streams as described above may be preferred to be generated continuously such that these streams are present in the system continuously, or at least during the prime time (say, 6-11 pm), for users to tap into. On the other hand, if there are relatively few users in the system, say several thousand users, or the particular program being delivered is not requested very frequently, some further bandwidth may be saved if the anti-latency data streams are generated upon request of the users. This alternative approach may be beneficial to Configurations 1, 2, and 3. These are shown in Figures 14, 15, and 16. In these figures, the data segments in grey represent those data segments or data streams that are "turned on" upon requests from the users.
For Configuration 1, each of the Group I anti-latency data streams is still staggered by an anti-latency stream interval T. However, as described above, not all of the Group I anti-latency data streams may be present or "turned on" at all times.
Instead, they are generated upon requests from the users, and such requests are
"batched" within T. It means that if the user raises a request for said data within an anti-latency stream interval, the anti-latency data stream are generated at the next earliest anti-latency stream interval. As an example, referring to Figure 14, consider users request the data time equals to 2T, 3T, and 167 In this context, it means that users request the data between the interval IT to IT, IT to 3T, and 157 to 167/, respectively. Accordingly, in this example, only streams 2, 3, and 16 are generated, or "turned on", in a snap shot of the data streams of the system, while streams 1 and 4-15 are "turned off. As shown in Figure 14, it may seem that the resulting Group I data streams do not have a regular stream interval.
This concept may also be extended to Configurations 2 and 3.
Accordingly, in Configuration 2, not all of the leading data segments in the leading data stream, nor all of the finishing data streams, may be "turned on" at all times. They are "turned on" upon requests from the users. An example is illustrated in Figure 15. It should be obvious to one skilled in the art that each of the leading data segments relates to a corresponding finishing data stream, and this may assist in achieving the goal by ordinary programming techniques. The corresponding finishing data stream should also be generated at the time when the leading data segment is generated.
Similarly, in Configuration 3, not all of the Fm segments distributed in the Group I data streams may be "turned on" at all times. An example is illustrated in Figure 16. Again, it should be obvious to one skilled in the art the relationship among the group data segments, which may assist in achieving the goal by ordinary programming techniques. All of the corresponding Fm segments should be generated at the appropriate time when the client raises the request. Specifically, subsequent F(m+1) segments should be generated before all data in the preceding Fm segments is received by the client.
Further, once the DDVR has merged with the Group II data streams, the anti-latency data streams can be terminated to further minimize the bandwidth usage.
As one of the basic requirements in Configuration 4 is that the user should be able to be connected to all of the Group I data streams, this "on-demand" approach does not appear to be applicable to Configuration 4.
Although this alternative approach may seem to save some additional bandwidth in comparison with the original Configurations, it may be less preferred for several reasons. First, it may increase the workload and the processing requirement on the server side, and the complexity in programming and implementation. Second, this may lead to the overload of the resulting system if care is not taken at the design stage in allocating the required bandwidth. Third, this alternative approach will in fact become the original Configurations when the number of requests from the users is large.
Additional Features of Individual Data Segments
To facilitate the change-over of the streams without incurring substantial loss of data during the transition, the beginning of each data segment, which may be termed the head portion, may contain duplicated data appearing in the tail portion of the immediately preceding segment. The amount of data to be carried in the duplicated portion may be T' (normalized with respect to the data rate of the stream), where T' is the delay that may be incurred during the change-over of the streams. Typically, T' may be in the order of 10 - 20 milliseconds.
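A minimal sketch of such fragmentation with a duplicated head portion is given below; the segment size, overlap and function name are illustrative assumptions only:

    def fragment_with_overlap(content: bytes, seg_size: int, overlap: int):
        # Split the content into segments of seg_size bytes; each segment after the
        # first is prefixed with the last `overlap` bytes of the preceding segment,
        # so that a change-over delay of T' between streams loses no data.
        segments = []
        for start in range(0, len(content), seg_size):
            head = content[max(0, start - overlap):start]
            segments.append(head + content[start:start + seg_size])
        return segments

    # A 10-20 ms change-over at 1 Mb/s corresponds to roughly 1.25-2.5 kB of duplicated data.
    segs = fragment_with_overlap(b"x" * 100, seg_size=20, overlap=3)
    print([len(s) for s in segs])           # [20, 23, 23, 23, 23]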
IVOD System Requirements
There are several system requirements:
a. The server needs to generate the appropriate multi-streams in patterns that have been illustrated in any one of Configurations 1 to 5 or such patterns as may be designed.
b. The distribution network should have sufficient capacity to carry all the required streams to the end user DDVR.
c. The end user DDVR should have sufficient bandwidth, buffer and processing capability to handle the multi-streams. The DDVR should also have sufficient storage to buffer at least one Group II stream interval of data from the multi-streams.
These factors may affect the service provider in choosing which configuration to deploy.
Concept of Diskless DVR
Generally, the receiver DDVR may have a processor for raising a request for the content, and a connector for connecting to the Group I and II data streams.
For Configurations 1 and 2, it may be necessary for the DDVR to include a buffer for buffering the received Group I data streams. For Configurations 3 and 4, the DDVR should include a buffer for buffering the data received from Group I data streams. The processor will then also be responsible for processing the data to put the data in a proper order.
With the multi-streaming concept, the receiving device, the receiver, at the user end may not need to have any hard disk storage. The only memory or buffer needed at the STB, the client/receiver, may be the RAM (random-access memory) to buffer one stream interval equivalent of data. Assuming a stream interval of 8 minutes, this requires roughly 60 MB of RAM for a 1 Mb/s MPEG-4 stream. This technique can be contrasted with many VOD techniques that require a large hard disk storage (sometimes as large as 60 GB) at the STB. Therefore, this IVOD system also appears to the users like a diskless DVR. However, the system provider may choose to provide additional storage to the users in the form of a hard disk or other non-volatile medium or use such other equipment as may be necessary to buffer and receive the data.
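The RAM figure quoted above can be checked with a short calculation; the assumed rate and interval follow the example in the text:

    # Buffer required at a diskless receiver: one Group II stream interval of data.
    stream_rate_bps = 1_000_000            # 1 Mb/s MPEG-4 stream (assumed)
    interval_s = 8 * 60                    # 8-minute Group II stream interval
    buffer_bytes = stream_rate_bps * interval_s / 8
    print(buffer_bytes / 1e6)              # 60.0 MB of RAM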
It should be further noted that there might be several options for the DDVR. First, the DDVR may be configured such that it plays the received data at a slower rate than the transmission rate of the data. The transmission rate may be expressed as S/T under the condition that each data segment contains the same amount of data. In such cases, the DDVR may be required to have a larger buffer size to accommodate the data received ahead of playback.
Secondly, the DDVR may be configured to contain or pre-fetch at least a portion of the data in the Group I data streams, i.e. the leading portion of the data to be transmitted, for a certain period of time in its local buffer. Such data may be termed "pre-fetched data". If desired, the pre-fetched data may contain all of the data contained in the Group I data streams provided that the DDVR has an adequate buffer size. In one extreme, the content of the data to be transmitted may be refreshed every day for video data, or more than once per day. In this particular example, it may be necessary for the pre-fetched data to be refreshed every day. The refresh time may be set at any desired value that may range from one day to even one year. It may be preferable to refresh the pre-fetched data during an off-peak period, like after midnight (for instance, from 01:00-06:00), or between 10:00 and 15:00, wherein the network activities resulting from clients' requests may be at a minimum. This process may be initiated by the anti-latency signal generator, the interactive signal generator, or by the client itself by a routine call procedure. In doing so, the latency time and the total number of data streams required in the network may be further reduced. This may be particularly important for VOD systems transmitting a large number of sets of data.
Trade-off of Space-Time-Bandwidth
There is a trade-off relationship for different configurations of the IVOD systems of this invention among buffer storage at DDVR (space), start-up latency (time) and streams (transmission bandwidth) required. This is shown in Table 3 and further illustrated in Figure 13.
In Figure 13, Vertex 1 may be realised as current VOD systems with all the data being sent and then stored in the STB, whether the client raises a request for the data or not. In such a case, the STB should have a relatively large buffer size. This may increase the manufacturing costs of the STB. Vertex 2 may represent the systems as described in Configurations 1-5. Under such a configuration, the requirement on the STB may be minimal while the system may be more demanding on the bandwidth. Vertex 3 may represent a hybrid system of Vertexes 1 and 2.
The decision on which "Vertex" to choose may be a matter of design choice depending on various factors including the bandwidth available, the specification of the STB, local requirements on latency and interactivity, and so on.
Table 3. Tradeoff among Buffer Storage (Space), Startup Latency (Time) and Streams (Transmission Bandwidth) Required
Application to cable, satellite and terrestrial broadcasting systems
The IVOD systems of this invention may find immediate applications in existing cable TV, terrestrial broadcasting, and satellite broadcasting systems. With very little modification of the existing infrastructure, non-interactive broadcasting or NVOD systems may be converted into an IVOD system. Both analogue and digital transmission systems can take advantage of the multi-streaming concept. However, the discussions below will only describe system configurations for digital transmission systems.
In these digital broadcasting systems, the RF transmission bands are usually divided into 6 MHz (NTSC) or 8 MHz (PAL) channels. There can be over a hundred channels in a cable TV, terrestrial or satellite broadcasting system. Figure 11 shows a typical system configuration for this multi-streaming system. It is very similar to existing broadcasting systems. Only the transmission unit at the head end, which may be called an anti-latency device, and the reception unit at the user end, the client/receiver, may need to be modified. At the head end, instead of sending analog signals in each channel, digital signals such as QAM are transmitted. Typically, one can put 30 - 40 Mb/s into an RF channel. Assuming a 2-hour content, one can first use MPEG-4 or other compression algorithms to convert the analog signal into a digital stream with a bit rate of roughly 1 Mb/s. Using the Fibonacci dual-streaming (Configuration 3) or the optimal harmonic multi-streaming IVOD concept (Configuration 4), one can place 30 to 40 of the IVOD streams into a single RF channel. The contents are put into different RF channels according to the PAL / NTSC / SECAM standard to maintain compatibility with the existing broadcasting system, and each RF channel can contain a few hours of contents.
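The channel-capacity figures above may be checked with a short calculation; the rates and channel counts are the assumed example values from the text:

    # How many IVOD streams and hours of content fit into a digital broadcasting plant.
    channel_rate_mbps = 30                 # usable payload of one RF channel (30-40 Mb/s typical)
    stream_rate_mbps = 1                   # one compressed MPEG-4 content stream
    streams_per_channel = channel_rate_mbps // stream_rate_mbps   # about 30 streams per channel

    channels = 100                         # RF channels in a typical system
    hours_per_channel = 2                  # e.g. one 2-hour title per channel (Configuration 3 or 4)
    print(streams_per_channel, channels * hours_per_channel)       # 30 streams, 200 hours of content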
At the user end, the set top box should be RF-tuned to the particular RF channel of interest. Then the cable modem would filter out the 30 - 40 Mb/s digital streams and decode two streams at a time (for Fibonacci dual-streaming systems) or decode all the harmonic multi-streams (for harmonic multi-streaming systems). Figure 12 shows the block diagram of the STB / cable modem. The STB / cable modem is similar to other STB / cable modems except for its processing unit, which can process at least 2 multi-streams simultaneously rather than a single stream. The decoded streams would be buffered in the STB and the content would be reconstructed according to the sequence number of the segments. With the hundreds of channels available in a typical broadcasting system, this translates to 200 hours or more of fully interactive programs available to an infinite number of users.
While the preferred embodiment of the present invention has been described in detail by the examples, it is apparent that modifications and adaptations of the present invention will occur to those skilled in the art. It is to be expressly understood, however, that such modifications and adaptations are within the scope of the present invention, as set forth in the following claims. Furthermore, the embodiments of the present invention shall not be interpreted to be restricted by the examples or figures only.

Claims

1. A system for transmitting data over a network to at least one client having a latency time to initiate transmission of said data to the client, including: at least one anti-latency signal generator for generating at least one anti-latency data stream containing at least a leading portion of data for receipt by a client; and at least one interactive signal generator for generating at least one interactive data stream containing at least a remaining portion of said data for the client to merge into after receiving at least a portion of an anti-latency data stream.
2. The system of Claim 1, wherein: said data is fragmented into K segments each requiring a time T to transmit over the network; the anti-latency data streams includes M anti-latency data streams; and the interactive data streams includes N interactive data streams.
3. The system of Claim 1, wherein: - the anti-latency data streams contain the leading portion of said data only; and the interactive data streams contain a whole set of said data.
4. The system of Claim 2, wherein: - each of the M anti-latency data streams contains substantially identical data repeated continuously within said anti-latency data stream, and wherein each successive anti-latency data stream is staggered by an anti-latency time interval; and each of the N interactive data streams is repeated continuously within said interactive data stream, and wherein each successive interactive data stream is staggered by an interactive time interval.
5. The system of Claim 4, wherein: each of the M anti-latency data streams has J segments; and the anti-latency time interval > T.
6. The system of Claim 4, wherein the interactive time interval > JT.
7. The system of Claim 5, wherein M ≥ J.
8. The system of Claim 7, wherein M = J.
9. The system of Claim 6, wherein N ≥ KT/(JT).
10. The system of Claim 9, wherein N = KT/(JT).
11. The system of Claim 8 or 10, wherein M = N = J = √(KT/T).
12. The system of Claim 4, wherein each of the N interactive data streams contains the whole set of said data having K segments.
13. The system of Claim 4, wherein each of the N interactive data streams contains the remaining portion of said data only.
14. The system of Claim 4, wherein: the client is connected to any one of the M anti-latency data streams when the client raises a request for said data; and the client is connected to any one of the N interactive data streams.
15. The system of Claim 2, wherein the anti-latency data streams includes:
I. a leading data stream containing at least one leading segment of the leading portion of said data being repeated continuously within the leading data stream; and II. a plurality of finishing data streams, each of the finishing data streams:
• containing the rest of the leading portion of said data; and
• being repeated continuously within said finishing data stream, and wherein each successive finishing data stream is staggered by an anti-latency time interval; each of the N interactive data streams is repeated continuously within said interactive data stream, and wherein each successive interactive data stream is staggered by an interactive time interval.
16. The system of Claim 15, wherein each of the finishing data streams has J segments; and the anti-latency time interval > T.
17. The system of Claim 15, wherein the interactive time interval > JT.
18. The system of Claim 16, wherein M ≥ J/2 + 1.
19. The system of Claim 18, wherein M = J/2 + 1.
20. The system of Claim 17, wherein N ≥ KT/(JT).
21. The system of Claim 20, wherein N = KT/(JT).
22. The system of Claim 19 or 21, wherein J = 2N.
23. The system of Claim 15, wherein each of the N interactive data streams contains the whole set of said data having K segments.
24. The system of Claim 15, wherein each of the N interactive data streams contains the remaining portion of said data only.
25. The system of Claim 15, wherein: the client is connected to the leading data stream when the client raises a request for said data; the client is subsequently connected to any one of the finishing data streams; and the client is connected to any one of the N interactive data streams.
26. The system of Claim 2, wherein: each of the N interactive data streams is repeated continuously within said interactive data stream, and wherein each successive interactive data stream is staggered by an interactive time interval = KT/N; the M anti-latency data streams [1 to M] are generated such that
• an mth anti-latency data stream has Fm segments, wherein Fm is an mth Fibonacci number; and
• the Fm segments are repeated continuously within the mth anti-latency data stream.
27. The system of Claim 26, wherein: the client is connected to at least the mth and (m+1)th anti-latency data streams when the client raises a request for said data; the data in at least the mth and (m+1)th anti-latency data streams is buffered in the client; and the client is subsequently connected to successive anti-latency data streams, until all data in the leading portion is received by the client.
28. The system of Claim 27, wherein: the client is connected to any one of the N interactive data streams after all data in the leading portion is received by the client.
29. The system of Claim 26, wherein each of the N interactive data streams contains the whole set of said data having K segments.
30. The system of Claim 26, wherein each of the N interactive data streams contains the remaining portion of said data only.
31. The system of Claim 26, wherein FM ≥ 2K/N.
32. The system of Claim 26, wherein m starts from 1.
33. The system of Claim 26, wherein m starts from 4 and the repeating 1st, 2nd, and 3rd anti-latency data streams have the following configuration:
34. The system of Claim 2, wherein: each of the N interactive data streams is repeated continuously within said interactive data stream, and wherein each successive interactive data stream is staggered by an interactive time interval = KT/N; and in the M anti-latency data streams,
I. the leading portion of said data contains J leading data segments [labelled 1 to J]; and
II. the leading data segments are distributed in the M anti-latency data streams such that a jth leading segment is repeated by an anti-latency time interval ≤ jT within the anti-latency data streams.
35. The system of Claim 34, wherein: the client is connected to all of the M anti-latency data streams when the client raises a request for said data; and the leading portion of said data in the M anti-latency data streams is buffered in the client.
36. The system of Claim 35 , wherein: the client is connected to any one of the N interactive data streams after all data in the leading portion is received by the client.
37. The system of Claim 34, wherein each of the N interactive data streams contains the whole set of said data having K segments.
38. The system of Claim 34, wherein each of the N interactive data streams contains the remaining portion of said data only.
39. The system of Claim 34, wherein - .
40. The system of Claim 34 wherein six of the M anti-latency data streams containing the leading data segments are arranged as follows:
[Table: an arrangement of the leading data segments over six anti-latency data streams, corresponding to the Group I arrangement of Figure 8, in which Segment 1 occupies every time slot of the first stream and each subsequent Segment j recurs at least once in every j time slots;]
wherein those segments in blank may contain any data.
41. The system of Claim 2, wherein the M anti-latency data streams contain the leading portion of said data; and further include two batches of data streams being a 1st set of anti-latency data streams and a 2nd set of anti-latency data streams.
42. The system of Claim 41, wherein: the 1st anti-latency data streams have A 1st anti-latency data streams [from 1 to A], wherein
I. an ath anti-latency data stream has Fa segments, and Fa is an ath Fibonacci number; and
II. the Fa segments are repeated continuously within the ath 1st anti-latency data stream; the 2nd anti-latency data streams have B 2nd anti-latency data streams, wherein each of the B 2nd anti-latency data streams contains substantially identical data repeated continuously within said 2nd anti-latency data stream, and wherein each successive 2nd anti-latency data stream is staggered by a coarse-jump frame period; such that the client can perform a coarse-jump function when the client is connected to the B 2nd anti-latency data streams.
43. The system of Claim 42, wherein: the client is connected to at least the ath and (a+1)th 1st anti-latency data streams when the client raises a request for said data; the data in at least the ath and (a+1)th 1st anti-latency data streams is buffered in the client; the client is subsequently connected to successive 1st anti-latency data streams; until all data in the A 1st anti-latency data streams is received by the client.
44. The system of Claim 43, wherein: the client is connected to any one of the B 2nd anti-latency data streams after all data in the 1st anti-latency data streams is received by the client; and the client is connected to any one of the N interactive data streams after all data in the connected B 2nd anti-latency data stream is received by the client.
45. The system of Claim 42, wherein each of the N interactive data streams contains the whole set of said data having K segments.
46. The system of Claim 42, wherein each of the N interactive data streams contains the remaining portion of said data only.
47. The system of Claim 42, wherein said coarse-jump frame period includes E data segments, and FA > 2E.
48. The system of Claim 42, wherein a starts from 1.
49. The system of Claim 42, wherein a starts from 4 and the repeating 1st, 2nd, and 3rd anti-latency data streams have the following configuration:
1st stream: 1 1 1 1 1 1 1 1 ...
2nd stream: 2 3 2 3 2 3 2 3 ...
3rd stream: 4 5 6 7 4 5 6 7 ...
50. The system of Claim 41, wherein: - the 1st anti-latency data streams have A 1st anti-latency data streams
[from 1 to A], wherein
I. an ath anti-latency data stream has Fa segments, wherein Fa is the ath Fibonacci number; and
II. the Fa segments are repeated continuously within the ath 1st anti-latency data stream; the 2nd anti-latency data streams have B 2nd anti-latency data streams including
I. a leading data stream containing at least one leading segment of the leading portion of said data being repeated continuously within the leading data stream; and
II. a plurality of finishing data streams, each of the finishing data streams:
• containing the rest of the leading portion of said data; and • being repeated continuously within said finishing data stream, and wherein each successive finishing data stream is staggered by a coarse-jump frame period such that the client can perform a coarse-jump interactive function when the client is connected to the B 2nd anti-latency data streams.
51. The system of Claim 50, wherein: the client is connected to at least the ath and (a+1)th 1st anti-latency data streams when the client raises a request for said data; - the data in at least the ath and (a+1)th 1st anti-latency data streams is buffered in the client; the client is subsequently connected to successive 1st anti-latency data streams; until all data in the A 1st anti-latency data streams is received by the client.
52. The system of Claim 51, wherein: the client is connected to the leading data stream after all data in the 1st anti-latency data streams is received by the client; the client is subsequently connected to any one of the finishing data streams; and the client is connected to any one of the N interactive data streams after all data in the B 2nd anti-latency data streams is received by the client.
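Claims 51 and 52 together describe the order in which the client attaches to the streams. The following is a minimal Python sketch of that connection order, not taken from the patent; the stream labels, the function name connection_plan, and the choice of one arbitrary finishing and interactive stream are assumptions for illustration.

# Hypothetical connection order for the claim-51/52 client: successive 1st anti-latency
# streams starting from the a-th, then the leading 2nd anti-latency stream, then one
# finishing stream, then one of the N interactive streams.

def connection_plan(a, A, num_finishing, N):
    """Ordered list of streams the client attaches to; labels are invented identifiers."""
    plan = [f"1st-anti-latency-{i}" for i in range(a, A + 1)]   # buffered one after another
    plan.append("2nd-anti-latency-leading")                     # after all 1st-set data
    plan.append(f"2nd-anti-latency-finishing (1 of {num_finishing})")  # any finishing stream
    plan.append(f"interactive (1 of {N})")                      # any interactive stream
    return plan

for step in connection_plan(a=1, A=4, num_finishing=3, N=6):
    print(step)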
53. The system of Claim 50, wherein each of the N interactive data streams contains the whole set of said data having K segments.
54. The system of Claim 50, wherein each of the N interactive data streams contains the remaining portion of said data only.
55. The system of Claim 50, wherein said coarse-jump frame period includes E data segments, and FA > 2E.
56. The system of Claim 50, wherein a starts from 1.
57. The system of Claim 50, wherein a starts from 4 and the repeating 1st, 2nd, and 3rd data streams of the A 1st anti-latency data streams have the following configuration:
1st stream: 1 1 1 1 1 1 1 1 ...
2nd stream: 2 3 2 3 2 3 2 3 ...
3rd stream: 4 5 6 7 4 5 6 7 ...
58. The system of Claim 41 , wherein: the 1st anti-latency data streams have A 1st anti-latency data streams, wherein,
I. the A 1st anti-latency data streams contain C 1st data segments [labeled 1 to C]; and
II. the 1st data segments are distributed in the A 1st anti-latency data streams such that a cth leading segment is repeated by an anti-latency time interval ≤ cT within the A 1st anti-latency data streams; the 2nd anti-latency data streams have B 2nd anti-latency data streams, wherein each of the B 2nd anti-latency data streams contains substantially identical data repeated continuously within said 2nd anti-latency data stream, and wherein each successive 2nd anti-latency data stream is staggered by a coarse-jump frame period; such that the client can perform a coarse-jump interactive function when the client is connected to the B 2nd anti-latency data streams.
59. The system of Claim 58, wherein: the client is connected to all of the A 1st anti-latency data streams when the client raises a request for said data; and data in the A 1st anti-latency data streams is buffered in the client until all data in the A 1st anti-latency data streams is received by the client.
60. The system of Claim 59, wherein: the client is connected to any one of the B 2nd anti-latency data streams after all data in the 1st anti-latency data streams is received by the client; and the client is connected to any one of the N interactive data streams after all data in the connected B 2nd anti-latency data stream is received by the client.
61. The system of Claim 58, wherein each of the N interactive data streams contains the whole set of said data having K segments.
62. The system of Claim 58, wherein each of the N interactive data streams contains the remaining portion of said data only.
63. The system of Claim 58, wherein said coarse-jump frame period includes E data segments, and ^ ≥ ∑ f A.
64. The system of Claim 58, wherein six of the A 1st anti-latency data streams are arranged as follows:
wherein those segments left blank may contain any data.
65. The system of Claim 41, wherein: the 1st anti-latency data streams have A 1st anti-latency data streams, wherein, I. the A 1st anti-latency data streams contain C 1st data segments; and II. the 1st data segments are distributed in the A 1st anti-latency data streams such that a cth leading segment is repeated by an anti-latency time interval ≤ cT within the A 1st anti-latency data streams; the 2nd anti-latency data streams have B 2nd anti-latency data streams including
I. a leading data stream containing at least one leading segment of the leading portion of said data being repeated continuously within the leading data stream; and
II. a plurality of finishing data streams, each of the finishing data streams:
• containing the rest of the leading portion of said data; and
• being repeated continuously within said finishing data stream, and wherein each successive finishing data stream is staggered by a coarse-jump frame period such that the client can perform a coarse-jump interactive function when the client is connected to the B 2nd anti-latency data streams.
66. The system of Claim 65, wherein: the client is connected to all of the A 1st anti-latency data streams when the client raises a request for said data; and data in the A 1st anti-latency data streams is buffered in the client until all data in the A 1st anti-latency data streams is received by the client.
67. The system of Claim 66, wherein: - the client is connected to the leading data stream of the B 2nd anti-latency data streams after all data in the 1st anti-latency data streams is received by the client; the client is subsequently connected to any one of the finishing data streams; and - the client is connected to any one of the N interactive data streams after all data in the connected B 2nd anti-latency data stream is received by the client.
68. The system of Claim 65, wherein each of the N interactive data streams contains the whole set of said data having K segments.
69. The system of Claim 65, wherein each of the N interactive data streams contains the remaining portion of said data only.
70. The system of Claim 65, wherein said coarse-jump frame period includes E data segments.
71. The system of Claim 67, wherein six of the A 1st anti-latency data streams are arranged as follows:
wherein those segments left blank may contain any data.
72. The system of any one of Claims 2, 4, 15, 26, 34, 41, 42, 50, 58, or 65, wherein each of the K data segments contains a head portion and a tail portion, and the head portion contains a portion of data of the tail portion of the immediately preceding segment to facilitate merging of the K data segments when received by the client.
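Claims 72 and 74 recite overlapped segmentation: each segment's head repeats part of the tail of the immediately preceding segment so that the client can splice the segments back together. The following self-contained sketch assumes a simple byte-level format and an overlap size chosen for illustration; none of it is taken from the patent.

# Minimal sketch of overlapped fragmentation and merging (assumed format).

def fragment_with_overlap(data: bytes, seg_len: int, overlap: int):
    """Split `data` into segments of `seg_len` payload bytes; every segment after the
    first is prefixed with the last `overlap` bytes of the preceding payload (its head)."""
    segments, pos = [], 0
    while pos < len(data):
        head = data[pos - overlap:pos] if pos else b""
        segments.append(head + data[pos:pos + seg_len])
        pos += seg_len
    return segments

def merge_segments(segments, overlap: int):
    """Reassemble the original data by discarding each duplicated head portion."""
    out = segments[0]
    for seg in segments[1:]:
        out += seg[overlap:]
    return out

data = bytes(range(50))
segs = fragment_with_overlap(data, seg_len=10, overlap=3)
assert merge_segments(segs, overlap=3) == data
print(len(segs), "segments, round-trip OK")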
73. The system of any one of Claims 2, 4, 15, 26, 34, 41, 42, 50, 58, or 65, wherein at least a portion of data in the leading portion is pre-fetched in the client.
74. A system for transmitting data over a network to at least one client including a signal generator for fragmenting said data into K data segments each requiring a time T to transmit over the network, wherein each of the K data segments contains a head portion and a tail portion, and the head portion contains a portion of data of the tail portion of the immediately preceding segment to facilitate merging of the K data segments when received by the client.
75. A system for transmitting data over a network to at least one client having a latency time to initiate transmission of said data to the client, including: - at least one anti-latency signal generator for generating at least one anti-latency data stream containing at least a leading portion of data for receipt by the client; a buffer in the client for pre-fetching the leading portion in the client as pre-fetched data; and - at least one interactive signal generator for generating at least one interactive data stream containing at least a remaining portion of said data for the client to merge into the leading portion.
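Claim 75 pairs a pre-fetched leading portion, held in a client-side buffer, with a remaining portion that arrives later on an interactive data stream. The sketch below uses an assumed byte-string model and invented class and method names; it only illustrates the buffer-then-merge idea described in the claim.

# Assumed client-side model: pre-fetch the leading portion, then merge the remainder.

class PrefetchClient:
    def __init__(self):
        self.prefetched = None  # leading portion held in the client buffer

    def prefetch(self, leading_portion: bytes):
        """Store the leading portion, e.g. refreshed during an off-peak period (Claim 77)."""
        self.prefetched = leading_portion

    def play(self, remaining_portion: bytes) -> bytes:
        """Merge the remaining portion into the pre-fetched leading portion; playback can
        begin from the buffer while the remainder is still streaming in."""
        if self.prefetched is None:
            raise RuntimeError("leading portion has not been pre-fetched")
        return self.prefetched + remaining_portion

client = PrefetchClient()
client.prefetch(b"LEADING-")
print(client.play(b"REMAINDER"))  # b'LEADING-REMAINDER'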
76. The system of Claim 75, wherein the pre-fetched data is refreshed during a refresh time period.
77. The system of Claim 76, wherein the refresh time period is an off-peak period.
78. The system of Claim 76, wherein pre-fetched data is refreshed once per day.
79. A system for transmitting data over a network to at least one client including at least one anti-latency signal generator for generating a plurality of anti- latency data streams, the anti-latency data streams include: a leading data stream containing at least one leading segment of a leading portion of said data being repeated continuously within the leading data stream; and a plurality of finishing data streams, each of the finishing data streams: • containing at least the rest of the leading portion of said data; and • repeated continuously within said finishing data stream, and wherein each successive finishing data stream is staggered by an anti-latency time interval.
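Claim 79's generator produces one leading data stream plus several finishing data streams, each repeating the rest of the leading portion and successively staggered by the anti-latency time interval. A hypothetical schedule generator is sketched below; the segment numbers, a stagger of one slot, and the stream lengths are all assumptions for illustration.

# Assumed slot-level schedules for a leading stream and staggered finishing streams.

def leading_stream(leading_segments, length):
    """Repeating schedule of the leading segment(s)."""
    return [leading_segments[t % len(leading_segments)] for t in range(length)]

def finishing_streams(rest_segments, count, stagger, length):
    """`count` finishing streams; stream i is staggered by i*stagger slots and repeats
    the remaining leading segments continuously."""
    streams = []
    for i in range(count):
        offset = i * stagger
        streams.append([rest_segments[(t - offset) % len(rest_segments)]
                        for t in range(length)])
    return streams

print(leading_stream([1], length=8))                  # [1, 1, 1, 1, 1, 1, 1, 1]
for s in finishing_streams([2, 3, 4, 5], count=2, stagger=1, length=8):
    print(s)                                          # staggered repetitions of 2 3 4 5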
80. The system of Claim 79, wherein: the client is connected to the leading data stream when the client raises a request for said data; and the client is subsequently connected to any one of the finishing data streams.
81. The system of Claim 79, wherein said data is fragmented into K segments each requiring a time T to transmit over the network, and the anti-latency time interval > T.
82. A system for transmitting data over a network to at least one client including at least one anti-latency signal generator for generating a plurality of anti- latency data streams, wherein the anti-latency data streams include:
M anti-latency data streams [from 1 to M], wherein an mth anti-latency data stream has Fm segments, and Fm is the mth Fibonacci number; and wherein said Fm segments are repeated continuously within the mth anti-latency data stream.
83. The system of Claim 82, wherein: the client is connected to at least the mth and (m+1)th anti-latency data streams when the client raises a request for said data; the data in at least the mth and (m+1)th anti-latency data streams is buffered in the client; the client is subsequently connected to successive anti-latency data streams; until all data is received by the client.
84. The system of Claim 82, wherein m starts from 1.
85. The system of Claim 82, wherein m starts from 4 and the repeating 1st, 2nd, and 3rd anti-latency data streams have the following configuration:
1st stream: 1 1 1 1 1 1 1 1 ...
2nd stream: 2 3 2 3 2 3 2 3 ...
3rd stream: 4 5 6 7 4 5 6 7 ...
86. A system for transmitting data over a network to at least one client, said data being fragmented into K segments each requiring a time T to transmit over the network, including at least one anti-latency signal generator for generating a plurality of anti-latency data streams, wherein the anti-latency data streams include: M anti-latency data streams containing J anti-latency data segments [labeled 1 to J], wherein the anti-latency data segments are distributed in the M anti-latency data streams such that a jth leading segment is repeated by an anti-latency time interval ≤ jT within the anti-latency data streams.
87. The system of Claim 86, wherein: the client is connected to all of the M anti-latency data streams when the client raises a request for said data; and said data in the M anti-latency data streams is buffered in the client.
88. The system of Claim 86, wherein six of the M anti-latency data streams containing the leading data segments are arranged as follows:
wherein those segments left blank may contain any data.
89. A receiver for receiving data being transmitted over a network to at least one client according to Claim 2, including: a processor for raising a request for said data; and at least one connector for connecting the client to the M anti-latency data streams and receiving data in the M anti-latency data streams.
90. The receiver of Claim 89, wherein: the connector is connected to the N interactive data streams after all data in the M anti-latency data streams is received by the receiver.
91. The receiver of Claim 89, wherein data in the leading portion is received sequentially.
92. The receiver of Claim 89, wherein the receiver connects to at least two of the anti-latency data streams simultaneously.
93. The receiver of Claim 92, further including: a buffer for buffering data in the two anti-latency data streams connected to the client, which data is received by the client sequentially.
94. The receiver of Claim 93, wherein the buffer includes random access memory and computer hard disk.
95. The receiver of Claim 93, wherein the buffer consists of random access memory.
96. The receiver of Claim 89, wherein the receiver connects to all of the anti- latency data streams simultaneously.
97. The receiver of Claim 96, further including: a buffer for buffering data in the anti-latency data streams connected to the client; and wherein the processor rearranges the buffered data according to a proper sequence.
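Claim 97's processor rearranges the buffered data into its proper sequence. Assuming, purely for illustration, that the buffered data arrives as (segment number, payload) pairs and may include repeats because anti-latency streams repeat their segments, a minimal reordering sketch is:

# Assumed buffered-arrival format; keep one copy per segment and restore playback order.

def rearrange(buffered):
    """`buffered` is a list of (segment_number, payload) pairs in arrival order; return
    the payloads ordered by segment number, ignoring repeated arrivals."""
    by_number = {}
    for seg_no, payload in buffered:
        by_number.setdefault(seg_no, payload)  # first arrival wins, later repeats ignored
    return [by_number[n] for n in sorted(by_number)]

arrivals = [(1, b"s1"), (3, b"s3"), (1, b"s1"), (2, b"s2"), (4, b"s4"), (2, b"s2")]
print(rearrange(arrivals))  # [b's1', b's2', b's3', b's4']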
98. The receiver of Claim 97, wherein the buffer includes random access memory and computer hard disk.
99. The receiver of Claim 97, wherein the buffer consists of random access memory.
100. The receiver of Claim 89, wherein at least a portion of data in the M anti- latency data streams is pre-fetched in the client as pre-fetched data.
101. The receiver of Claim 100, wherein the pre-fetched data is refreshed during a refresh time period.
102. The receiver of Claim 101, wherein the refresh time period is 01:00-06:00.
103. The receiver of Claim 101, wherein the refresh time period is 10:00-15:00.
104. A receiver for receiving data being transmitted over a network to at least one client, wherein said data includes a leading portion and a remaining portion, and the remaining portion is transmitted by at least one interactive data stream, including: a buffer for pre-fetching the leading portion in the client as pre-fetched data; and - a processor for merging the pre-fetched data to the remaining portion.
105. The receiver of Claim 104, wherein the pre-fetched data is refreshed during a refresh time period.
106. The receiver of Claim 105, wherein the refresh time period is an off-peak period.
107. The receiver of Claim 105, wherein pre-fetched data is refreshed once per day.
108. A system for transmitting data over a network to at least one client having a latency time to initiate transmission of said data to the client, including: at least one anti-latency signal generator for generating at least one anti-latency data stream containing at least a leading portion of said data for receipt by the client; and at least one interactive signal generator for generating at least one interactive data stream containing at least a remaining portion of said data for the client to merge into after receiving at least a portion of an anti-latency data stream, wherein: the leading portion of said data • can be generated at regular anti-latency stream intervals; and
• is generated at the next earliest anti-latency stream interval after at least one client raises a request for said data.
109. The system of Claim 108, wherein: - said data requiring a time R to be transmitted over the network is fragmented into K segments each requiring a time T to transmit over the network; the anti-latency data streams includes M anti-latency data streams, wherein each of the M anti-latency data streams • contains substantially identical data
• can be generated at regular anti-latency time intervals; and
• are generated at the next earliest anti-latency stream interval after the client raises a request for said data; the interactive data streams includes N interactive data streams, wherein each of the N interactive data streams is repeated continuously within said interactive data stream, and each successive interactive data stream is staggered by an interactive time interval.
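Claims 108 and 109 generate anti-latency streams only on a regular grid of anti-latency stream intervals, starting at the next earliest interval after a request arrives. A small sketch of that timing rule follows; the use of seconds, the function name, and the printed example are assumptions.

# Assumed timing model: stream generation snaps to the next anti-latency interval boundary.

import math

def next_generation_time(request_time: float, interval: float) -> float:
    """Next anti-latency stream interval boundary at or after the request time."""
    return math.ceil(request_time / interval) * interval

print(next_generation_time(request_time=12.3, interval=5.0))  # 15.0
print(next_generation_time(request_time=15.0, interval=5.0))  # 15.0 (already on a boundary)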
110. The system of Claim 109, wherein: each of the M anti-latency data streams has J segments; and the anti-latency time interval > T.
111. The system of Claim 110, wherein the interactive time interval > JT.
112. The system of Claim 111, wherein M ≥ J.
113. The system of Claim 110, wherein N > R/(JT).
114. The system of Claim 113, wherein M = N = J = √(R/T).
115. The system of Claim 109, wherein each of the N interactive data streams contains the whole set of said data having K segments.
116. The system of Claim 109, wherein each of the N interactive data streams contains the remaining portion of said data only.
117. The system of Claim 109, wherein: - the client is connected to the M anti-latency data streams generated for the client when the client raises the request for said data; the client is connected to any one of the N interactive data streams; and the M anti-latency data streams generated for the client are terminated after the client is connected to one of the N interactive data streams.
118. The system of Claim 108, wherein: said data requiring a time R to be transmitted over the network is fragmented into K segments each requiring a time T to transmit over the network; the anti-latency data streams includes M anti-latency data streams including:
I. a leading data stream that
• contains at least one leading segment of the leading portion of said data
• can be generated at regular anti-latency time intervals; and • are generated at the next earliest anti-latency stream interval after the client raises a request for said data;
II. a plurality of finishing data streams, wherein each of the finishing data streams:
• contains the rest of the leading portion of said data; • corresponds to one of the leading segments; and
• are generated when the corresponding leading segment is generated; the interactive data streams includes N interactive data streams, wherein each of the N interactive data streams is repeated continuously within said interactive data stream, and each successive interactive data stream is staggered by an interactive time interval.
119. The system of Claim 118, wherein each of the finishing data streams has J segments; and - the anti-latency time interval > T.
120. The system of Claim 119, wherein the interactive time interval > JT.
121. The system of Claim 119, wherein M ≥ J/2 + 1.
122. The system of Claim 120, wherein N ≥ R/(JT).
123. The system of Claim 120, wherein J = √K.
124. The system of Claim 118, wherein each of the N interactive data streams contains the whole set of said data having K segments.
125. The system of Claim 118, wherein each of the N interactive data streams contains the remaining portion of said data only.
126. The system of Claim 118, wherein: the client is connected to the leading data segment generated for the client when the client raises the request for said data; the client is subsequently connected to the corresponding finishing data stream; - the client is connected to any one of the N interactive data streams; and the leading data segment and the corresponding finishing data stream generated for the client are terminated after the client is connected to one of the N interactive data streams.
127. The system of Claim 108, wherein: said data requiring a time R to be transmitted over the network is fragmented into K segments each requiring a time T to transmit over the network; the interactive data streams includes N interactive data streams, wherein each of the N interactive data streams is repeated continuously within said interactive data stream, and wherein each successive interactive data stream is staggered by an interactive time interval = KT/N; the anti-latency data streams includes M anti-latency data streams, such that
• an mth anti-latency data stream has Fm segments, wherein Fm is the mth Fibonacci number; • the Fm segments can be generated at regular anti-latency stream intervals;
• the first Fm segment is generated at the next earliest anti-latency stream interval when the client raises a request for said data; and • subsequent F(m+1) segments are generated before all data in the preceding Fm segment is received by the client.
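Claim 127 requires each group of F(m+1) segments to be generated before the client has received all of the preceding Fm segments. Under the assumed convention F1 = F2 = 1 and a per-segment transmission time T, the latest allowable start times can be sketched as below; the function names and printed numbers are illustrative only.

# Assumed deadline model for generating the successive Fibonacci-sized streams on demand.

def fibonacci(n):
    """First n Fibonacci numbers, assuming F1 = F2 = 1."""
    fibs = [1, 1]
    while len(fibs) < n:
        fibs.append(fibs[-1] + fibs[-2])
    return fibs[:n]

def generation_deadlines(M, T, start_time=0.0):
    """Latest start time of each of the M streams so that stream m+1 is available before
    the client finishes receiving the F_m segments of stream m."""
    deadlines, t = [], start_time
    for f in fibonacci(M):
        deadlines.append(t)
        t += f * T  # the client needs f*T to receive the F_m segments of this stream
    return deadlines

print(generation_deadlines(M=5, T=2.0))  # [0.0, 2.0, 4.0, 8.0, 14.0]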
128. The system of Claim 127, wherein: the client is connected to at least the mth and (m+1)th anti-latency data streams when the client raises a request for said data; the data in at least the mth and (m+1)th anti-latency data streams is buffered in the client; the client is subsequently connected to successive anti-latency data streams before all data in the leading portion is received by the client.
129. The system of Claim 127, wherein: the client is connected to any one of the N interactive data streams after all data in the leading portion is received by the client; and the M anti-latency data streams is terminated after the client is connected to one ofthe N interactive data streams.
130. The system of Claim 127, wherein each of the N interactive data streams contains the whole set of said data having K segments.
131. The system of Claim 127, wherein each of the N interactive data streams contains the remaining portion of said data only.
132. The system of Claim 127, wherein FM > K/N.
133. The system of Claim 127, wherein m starts from 1.
134. The system of Claim 127, wherein m starts from 4 and the repeating 1st, 2nd, and 3rd anti-latency data streams have the following configuration:
1st stream: 1 1 1 1 1 1 1 1 ...
2nd stream: 2 3 2 3 2 3 2 3 ...
3rd stream: 4 5 6 7 4 5 6 7 ...
135. An anti-latency signal generator for generating a plurality of anti-latency data streams to transmit data over a network to at least one client, wherein the anti- latency data streams include: a leading data stream that
• contains at least one leading segment of the leading portion of said data
• can be generated at regular anti-latency time intervals; and
• are generated at the next earliest anti-latency stream interval after the client raises a request for said data; a plurality of finishing data streams, each of the finishing data streams:
• contains the rest of the leading portion of said data;
• corresponds to one of the leading segments; and
• are generated when the corresponding leading segment is generated.
136. The anti-latency signal generator of Claim 135, wherein: the client is connected to the leading data stream when the client raises a request for said data; and the client is subsequently connected to the corresponding finishing data stream.
137. The anti-latency signal generator of Claim 135, wherein said data is fragmented into K segments each requiring a time T to transmit over the network, and the anti-latency time interval > T.
138. An anti-latency signal generator for generating M anti-latency data streams to transmit data over a network to at least one client, wherein an mth anti-latency data stream has Fm segments, and Fm is the mth Fibonacci number; the Fm segments can be generated at regular anti-latency stream intervals; the first Fm segment is generated at the next earliest anti-latency stream interval when the client raises a request for said data; and subsequent F(m+1) segments are generated before all data in the preceding Fm segment is received by the client.
139. The anti-latency signal generator of Claim 138, wherein: the client is connected to at least the mth and (m+1)th anti-latency data streams when the client raises a request for said data; the data in at least the mth and (m+1)th anti-latency data streams is buffered in the client; - the client is subsequently connected to successive anti-latency data streams until all data in the leading portion is received by the client.
140. The anti-latency signal generator of Claim 138, wherein m starts from 1.
141. The anti-latency signal generator of Claim 138, wherein m starts from 4 and the repeating 1st, 2nd, and 3rd anti-latency data streams have the following configuration:
1st stream: 1 1 1 1 1 1 1 1 ...
2nd stream: 2 3 2 3 2 3 2 3 ...
3rd stream: 4 5 6 7 4 5 6 7 ...
EP02754152A 2001-07-31 2002-07-29 System for delivering data over a network Withdrawn EP1433324A4 (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US09/917,639 US7574728B2 (en) 2001-07-31 2001-07-31 System for delivering data over a network
US917639 2001-07-31
US954041 2001-09-18
US09/954,041 US7200669B2 (en) 2001-07-31 2001-09-18 Method and system for delivering large amounts of data with interactivity in an on-demand system
PCT/CN2002/000527 WO2003013124A2 (en) 2001-07-31 2002-07-29 System for delivering data over a network

Publications (2)

Publication Number Publication Date
EP1433324A2 true EP1433324A2 (en) 2004-06-30
EP1433324A4 EP1433324A4 (en) 2007-04-18

Family

ID=27129728

Family Applications (1)

Application Number Title Priority Date Filing Date
EP02754152A Withdrawn EP1433324A4 (en) 2001-07-31 2002-07-29 System for delivering data over a network

Country Status (7)

Country Link
EP (1) EP1433324A4 (en)
JP (1) JP2005505957A (en)
KR (1) KR100639428B1 (en)
CN (1) CN100477786C (en)
AU (1) AU2002322988C1 (en)
CA (1) CA2451901C (en)
WO (1) WO2003013124A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1433323A1 (en) * 2001-07-31 2004-06-30 Dinastech IPR Limited Method for delivering data over a network

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7574728B2 (en) 2001-07-31 2009-08-11 Dinastech Ipr Limited System for delivering data over a network
US7174384B2 (en) 2001-07-31 2007-02-06 Dinastech Ipr Limited Method for delivering large amounts of data with interactivity in an on-demand system
CN1228982C (en) * 2002-12-05 2005-11-23 国际商业机器公司 Channel combination method of VOD system
US6932435B2 (en) 2003-11-07 2005-08-23 Mckechnie Vehicle Components (Usa), Inc. Adhesive patterns for vehicle wheel assemblies
WO2006011270A1 (en) * 2004-07-27 2006-02-02 Sharp Kabushiki Kaisha Pseudo video-on-demand system, pseudo video-on-demand system control method, and program and recording medium used for the same
BRPI0520497A2 (en) 2005-08-26 2009-05-12 Thomson Licensing system and method on demand using dynamic transmission programming
CN101146211B (en) * 2006-09-11 2010-06-02 思华科技(上海)有限公司 Load balance system and method of VoD network
EP1914932B1 (en) * 2006-10-19 2010-12-15 Thomson Licensing Method for optimising the transmission of DVB-IP service information by partitioning into several multicast streams
EP2819364A1 (en) * 2013-06-25 2014-12-31 British Telecommunications public limited company Content distribution system and method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0749241A2 (en) * 1995-06-15 1996-12-18 International Business Machines Corporation Fixed video-on-demand
WO2001024526A1 (en) * 1999-09-27 2001-04-05 Koninklijke Philips Electronics N.V. Scalable system for video-on-demand
EP1433323A1 (en) * 2001-07-31 2004-06-30 Dinastech IPR Limited Method for delivering data over a network

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5822530A (en) * 1995-12-14 1998-10-13 Time Warner Entertainment Co. L.P. Method and apparatus for processing requests for video on demand versions of interactive applications
US6233017B1 (en) * 1996-09-16 2001-05-15 Microsoft Corporation Multimedia compression system with adaptive block sizes
JP3825099B2 (en) * 1996-09-26 2006-09-20 富士通株式会社 Video data transfer method and video server device
US6563515B1 (en) * 1998-05-19 2003-05-13 United Video Properties, Inc. Program guide system with video window browsing

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0749241A2 (en) * 1995-06-15 1996-12-18 International Business Machines Corporation Fixed video-on-demand
WO2001024526A1 (en) * 1999-09-27 2001-04-05 Koninklijke Philips Electronics N.V. Scalable system for video-on-demand
EP1433323A1 (en) * 2001-07-31 2004-06-30 Dinastech IPR Limited Method for delivering data over a network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
L. GAO, Z-L ZHANG, D. TOWSLEY: "Catching and selective catching: efficient latency reduction techniques for delivering continuous multimedia streams" PROCEEDINGS OF THE SEVENTH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA (PART 1), [Online] 1999, pages 203-206, XP002409472 New York, USA ISBN: 1-58113-151-8 Retrieved from the Internet: URL:http://portal.acm.org/> [retrieved on 2006-11-27] *
See also references of WO03013124A2 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1433323A1 (en) * 2001-07-31 2004-06-30 Dinastech IPR Limited Method for delivering data over a network
EP1433323A4 (en) * 2001-07-31 2007-04-18 Dinastech Ipr Ltd Method for delivering data over a network

Also Published As

Publication number Publication date
CA2451901A1 (en) 2003-02-13
CN100477786C (en) 2009-04-08
WO2003013124A2 (en) 2003-02-13
CA2451901C (en) 2010-02-16
EP1433324A4 (en) 2007-04-18
CN1535536A (en) 2004-10-06
AU2002322988B2 (en) 2007-11-15
JP2005505957A (en) 2005-02-24
KR20040041574A (en) 2004-05-17
WO2003013124A3 (en) 2003-05-15
KR100639428B1 (en) 2006-10-30
AU2002322988C1 (en) 2008-05-22

Similar Documents

Publication Publication Date Title
CA2451897C (en) Method for delivering data over a network
US7174384B2 (en) Method for delivering large amounts of data with interactivity in an on-demand system
AU2002322987A1 (en) Method for delivering data over a network
US6557030B1 (en) Systems and methods for providing video-on-demand services for broadcasting systems
US20020026501A1 (en) Decreased idle time and constant bandwidth data-on-demand broadcast delivery matrices
CA2451901C (en) System for delivering data over a network
AU2002322988A1 (en) System for delivering data over a network
Pâris et al. Limiting the Client Bandwidth of Broadcasting protocols for Videos on demand
US7574728B2 (en) System for delivering data over a network
US20020138845A1 (en) Methods and systems for transmitting delayed access client generic data-on demand services
JP2006509454A (en) Channel tapping in near video on demand system
CA2406715A1 (en) Methods for providing video-on-demand services for broadcasting systems
Thirumalai et al. Tabbycat: an inexpensive scalable server for video-on-demand
CA2428829A1 (en) Decreased idle time and constant bandwidth data-on-demand broadcast delivery matrices
WO2002086673A2 (en) Transmission of delayed access client data and demand
KR20040063795A (en) Transmission of delayed access client data and demand
EP1402331A2 (en) Methods and systems for transmitting delayed access client generic data-on demand services

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20040213

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LI LU MC NL PT SE SK TR

AX Request for extension of the european patent

Extension state: AL LT LV MK RO SI

REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 1066674

Country of ref document: HK

A4 Supplementary search report drawn up and despatched

Effective date: 20070320

17Q First examination report despatched

Effective date: 20071116

REG Reference to a national code

Ref country code: HK

Ref legal event code: WD

Ref document number: 1066674

Country of ref document: HK

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20110201